Source-Free Domain Adaptation
64 papers with code • 3 benchmarks • 3 datasets
Most implemented papers
Concurrent Subsidiary Supervision for Unsupervised Source-Free Domain Adaptation
The prime challenge in unsupervised domain adaptation (DA) is to mitigate the domain shift between the source and target domains.
Upcycling Models under Domain and Category Shift
We examine the superiority of our GLC on multiple benchmarks with different category shift scenarios, including partial-set, open-set, and open-partial-set DA.
Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Tent: Fully Test-time Adaptation by Entropy Minimization
A model must adapt itself to generalize to new and different data during testing.
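The title names the core idea: adapt the model at test time by minimizing the entropy of its own predictions. Below is a minimal sketch of that idea, not the authors' official Tent code; it assumes a PyTorch model and updates only the BatchNorm affine parameters on unlabeled test batches, with all names (`collect_bn_params`, `entropy_minimization_step`) chosen here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def collect_bn_params(model: nn.Module):
    """Gather affine parameters of BatchNorm layers; only these get updated."""
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            if m.weight is not None:
                params.append(m.weight)
            if m.bias is not None:
                params.append(m.bias)
    return params

def entropy_minimization_step(model, batch, optimizer):
    """One adaptation step: minimize prediction entropy on an unlabeled test batch."""
    logits = model(batch)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Usage sketch: keep the model in train mode so BatchNorm uses batch statistics,
# and pass only the BatchNorm parameters to the optimizer.
# optimizer = torch.optim.SGD(collect_bn_params(model), lr=1e-3)
```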
Casting a BAIT for Offline and Online Source-free Domain Adaptation
When adapting to the target domain, the additional classifier, initialized from the source classifier, is expected to find misclassified target features.
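A rough sketch of that idea follows: clone the source classifier into a second head and score target features by how strongly the two heads disagree, treating high-disagreement samples as likely misclassified. All names are illustrative and this is not the official BAIT implementation.

```python
import copy
import torch
import torch.nn.functional as F

def make_bait_heads(source_classifier: torch.nn.Linear):
    """Keep the source head frozen and train a copy of it as the extra classifier."""
    anchor = source_classifier
    bait = copy.deepcopy(source_classifier)
    for p in anchor.parameters():
        p.requires_grad_(False)
    return anchor, bait

def disagreement(anchor, bait, feats):
    """Symmetric KL between the two heads' predictions; higher means more suspect."""
    p = F.softmax(anchor(feats), dim=1)
    q = F.softmax(bait(feats), dim=1)
    kl_pq = (p * (p.clamp_min(1e-8).log() - q.clamp_min(1e-8).log())).sum(dim=1)
    kl_qp = (q * (q.clamp_min(1e-8).log() - p.clamp_min(1e-8).log())).sum(dim=1)
    return 0.5 * (kl_pq + kl_qp)
```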
SS-SFDA : Self-Supervised Source-Free Domain Adaptation for Road Segmentation in Hazardous Environments
We present a novel approach for unsupervised road segmentation in adverse weather conditions such as rain or fog.
Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation
In this paper, we address the challenging source-free domain adaptation (SFDA) problem, where the source pretrained model is adapted to the target domain in the absence of source data.
ProxyMix: Proxy-based Mixup Training with Label Refinery for Source-Free Domain Adaptation
To avoid additional parameters and to exploit the information already in the source model, ProxyMix treats the classifier weights as class prototypes and constructs a class-balanced proxy source domain from the nearest target neighbors of those prototypes, bridging the unseen source domain and the target domain.
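A minimal sketch of the prototype-to-proxy step described above, assuming a source model whose final layer is a linear `classifier` and a precomputed `(N, D)` tensor of target features; the function name and the choice of cosine similarity and `k` neighbors per class are illustrative, not the authors' exact ProxyMix pipeline.

```python
import torch
import torch.nn.functional as F

def build_proxy_source(classifier: torch.nn.Linear, target_feats: torch.Tensor, k: int = 16):
    """Use classifier weight rows as class prototypes and pick each prototype's
    k nearest target features to form a class-balanced proxy source set."""
    prototypes = F.normalize(classifier.weight.data, dim=1)   # (C, D)
    feats = F.normalize(target_feats, dim=1)                  # (N, D)
    sim = feats @ prototypes.t()                              # cosine similarity, (N, C)
    proxy_idx, proxy_labels = [], []
    for c in range(prototypes.size(0)):
        topk = sim[:, c].topk(k).indices                      # k nearest targets to prototype c
        proxy_idx.append(topk)
        proxy_labels.append(torch.full((k,), c, dtype=torch.long))
    return torch.cat(proxy_idx), torch.cat(proxy_labels)
```

Because every class contributes exactly k pseudo-labeled samples, the proxy set stays class-balanced regardless of how skewed the target data is.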
Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning
We investigate a practical domain adaptation task, called source-free unsupervised domain adaptation (SFUDA), where the source-pretrained model is adapted to the target domain without access to the source data.
Towards Source-free Domain Adaptive Semantic Segmentation via Importance-aware and Prototype-contrast Learning
Source-free domain adaptive semantic segmentation utilizes a well-trained source model and unlabeled target data to achieve adaptation in the target domain.