Cross-Domain Few-Shot Learning
31 papers with code • 1 benchmark • 1 dataset
Its essence is transfer learning: a model is trained on a source domain and then transferred to a target domain, under three conditions: (1) the classes in the target domain never appear in the source domain; (2) the data distribution of the target domain differs from that of the source domain; (3) each class in the target domain has only a few labeled samples.
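To make the setting concrete, evaluation is typically done on N-way K-shot episodes sampled from the target domain. The sketch below is a minimal illustration of such episode sampling; the `dataset` format (a list of `(image, label)` pairs) and all parameter names are assumptions, not tied to any particular benchmark:

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=5, n_query=15):
    """Sample one N-way K-shot episode from a target-domain dataset
    given as a list of (image, label) pairs (a hypothetical format)."""
    by_class = defaultdict(list)
    for image, label in dataset:
        by_class[label].append(image)
    # Keep only classes with enough examples for both support and query sets.
    eligible = [c for c, imgs in by_class.items() if len(imgs) >= k_shot + n_query]
    classes = random.sample(eligible, n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        imgs = random.sample(by_class[c], k_shot + n_query)
        support += [(img, episode_label) for img in imgs[:k_shot]]
        query += [(img, episode_label) for img in imgs[k_shot:]]
    return support, query
```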
Most implemented papers
Cross-domain Few-shot Learning with Task-specific Adapters
In this paper, we look at the problem of cross-domain few-shot classification that aims to learn a classifier from previously unseen classes and domains with few labeled samples.
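One common way to realize this idea, in the spirit of task-specific adapters, is to freeze a pre-trained backbone and attach small residual modules estimated from the support set alone. The following is a minimal sketch of one such adapter for a PyTorch convolutional backbone; it is illustrative and not the paper's exact implementation:

```python
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Illustrative residual adapter: a 1x1 convolution in parallel with a
    frozen backbone convolution; only the adapter is trained on the support set."""
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():
            p.requires_grad = False                     # backbone stays frozen
        self.adapter = nn.Conv2d(conv.in_channels, conv.out_channels,
                                 kernel_size=1, stride=conv.stride, bias=False)
        nn.init.zeros_(self.adapter.weight)             # start as an identity edit

    def forward(self, x):
        return self.conv(x) + self.adapter(x)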
Self-Supervision Can Be a Good Few-Shot Learner
Specifically, we maximize the mutual information (MI) of instances and their representations with a low-bias MI estimator to perform self-supervised pre-training.
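The paper's specific low-bias estimator is not reproduced here. For orientation only, a standard InfoNCE-style contrastive loss, the usual MI lower bound that low-bias estimators are compared against, can be sketched as follows (tensor shapes and the temperature value are assumptions):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Standard InfoNCE contrastive loss between two augmented views
    z1, z2 of shape [batch, dim]; minimizing it maximizes a lower bound
    on the mutual information between instances and representations."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)         # positives on the diagonal
```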
Self-Supervised Learning For Few-Shot Image Classification
In this paper, we proposed to train a more generalized embedding network with self-supervised learning (SSL) which can provide robust representation for downstream tasks by learning from the data itself.
A Broader Study of Cross-Domain Few-Shot Learning
Extensive experiments on the proposed benchmark are performed to evaluate state-of-the-art meta-learning approaches, transfer learning approaches, and newer methods for cross-domain few-shot learning.
Cross-Domain Few-Shot Learning by Representation Fusion
On the few-shot datasets miniImageNet and tieredImageNet with small domain shifts, CHEF is competitive with state-of-the-art methods.
Shallow Bayesian Meta Learning for Real-World Few-Shot Recognition
Current state-of-the-art few-shot learners focus on developing effective training procedures for feature representations before using simple classifiers, e.g., nearest centroid.
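A nearest-centroid classifier on top of fixed features is simple to state; here is a minimal sketch in PyTorch (tensor shapes and the choice of Euclidean distance are assumptions):

```python
import torch

def nearest_centroid(support_feats, support_labels, query_feats):
    """Average support embeddings per class, then assign each query
    to the class of the nearest centroid (Euclidean distance)."""
    classes = support_labels.unique()
    centroids = torch.stack(
        [support_feats[support_labels == c].mean(dim=0) for c in classes])
    dists = torch.cdist(query_feats, centroids)     # [n_query, n_way]
    return classes[dists.argmin(dim=1)]
```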
Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty
This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain.
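As a sketch of what such a two-source recipe can look like (all names here, including `model.classify`, `ssl_loss`, and the weighting `lam`, are hypothetical and not the paper's API):

```python
import torch.nn.functional as F

def pretrain_step(model, src_batch, tgt_batch, ssl_loss, lam=1.0):
    """Hypothetical joint objective: supervised cross-entropy on the
    labeled source domain plus a self-supervised term on unlabeled
    target-domain images."""
    x_src, y_src = src_batch
    logits = model.classify(model(x_src))           # hypothetical classifier head
    supervised = F.cross_entropy(logits, y_src)
    self_supervised = ssl_loss(model, tgt_batch)    # e.g. a contrastive loss
    return supervised + lam * self_supervised
```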
Universal Representations: A Unified Look at Multiple Task and Domain Learning
We propose a unified look at jointly learning multiple vision tasks and visual domains through universal representations, a single deep neural network.
StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning
Thus, inspired by vanilla adversarial learning, a novel model-agnostic meta Style Adversarial training (StyleAdv) method together with a novel style adversarial attack method is proposed for CD-FSL.
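The core idea, perturbing feature "styles" rather than pixels, can be sketched roughly as below. This is an illustrative FGSM-style attack on per-channel feature statistics, not the paper's exact algorithm; `loss_fn` and `epsilon` are assumptions:

```python
import torch

def style_fgsm(feat, loss_fn, epsilon=0.02):
    """Illustrative signed-gradient attack on feature statistics: treat the
    per-channel mean/std of a feature map [B, C, H, W] as its 'style',
    take one FGSM step on the task loss, and re-render the content
    with the attacked style."""
    mu = feat.mean(dim=(2, 3), keepdim=True).detach().requires_grad_(True)
    sigma = feat.std(dim=(2, 3), keepdim=True).detach().requires_grad_(True)
    content = ((feat - feat.mean(dim=(2, 3), keepdim=True))
               / (feat.std(dim=(2, 3), keepdim=True) + 1e-6)).detach()
    loss = loss_fn(content * sigma + mu)   # style re-attached as leaf tensors
    loss.backward()
    with torch.no_grad():
        mu_adv = mu + epsilon * mu.grad.sign()
        sigma_adv = sigma + epsilon * sigma.grad.sign()
    return content * sigma_adv + mu_adv
```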
A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning
The TMHFS method extends the Meta-Confidence Transduction (MCT) and Dense Feature-Matching Networks (DFMN) method [2] by introducing a new prediction head, i.e., an instance-wise global classification network based on semantic information, after the common feature embedding network.
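Structurally, that amounts to a shared embedding feeding more than one head. Below is a minimal sketch of the layout, with all class and attribute names hypothetical rather than taken from the TMHFS code:

```python
import torch.nn as nn

class MultiHeadFewShot(nn.Module):
    """Hypothetical layout: one shared embedding network feeds both the
    metric-based few-shot heads and an instance-wise global classifier
    over all source-domain classes."""
    def __init__(self, backbone: nn.Module, feat_dim: int, n_global_classes: int):
        super().__init__()
        self.backbone = backbone
        self.global_head = nn.Linear(feat_dim, n_global_classes)

    def forward(self, x):
        feats = self.backbone(x)                # shared embedding
        return feats, self.global_head(feats)  # feats also go to metric heads
```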