Data-free Knowledge Distillation
25 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Data-Free Knowledge Distillation for Heterogeneous Federated Learning
Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global server iteratively averages the model parameters of local users without accessing their data.
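To make the server-side aggregation described above concrete, here is a minimal FedAvg-style sketch of weighted parameter averaging. This is an illustrative assumption about the aggregation step only, not the data-free generator method proposed in the paper; the function name, weighting scheme, and use of PyTorch state dicts are hypothetical.

```python
import torch  # client/server models are assumed to be PyTorch modules


def federated_average(global_model, client_state_dicts, client_weights):
    """Weighted average of client parameters (FedAvg-style aggregation sketch).

    client_weights are assumed proportional to each client's local data size.
    """
    total = sum(client_weights)
    avg_state = {}
    for key in global_model.state_dict().keys():
        # Average each parameter/buffer across clients, weighted by data size.
        avg_state[key] = sum(
            (w / total) * sd[key].float()
            for sd, w in zip(client_state_dicts, client_weights)
        )
    global_model.load_state_dict(avg_state)
    return global_model
```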
Contrastive Model Inversion for Data-Free Knowledge Distillation
In this paper, we propose Contrastive Model Inversion (CMI), where the data diversity is explicitly modeled as an optimizable objective, to alleviate the mode collapse issue.
Data-Free Knowledge Distillation for Deep Neural Networks
Recent advances in model compression have provided procedures for compressing large neural networks to a fraction of their original size while retaining most, if not all, of their accuracy.
Up to 100× Faster Data-free Knowledge Distillation
At the heart of our approach is a novel strategy to reuse the shared common features in training data so as to synthesize different data instances.
DAD++: Improved Data-free Test Time Adversarial Defense
With the increasing deployment of deep neural networks in safety-critical applications such as self-driving cars, medical imaging, and anomaly detection, adversarial robustness has become a crucial concern for the reliability of these networks in real-world scenarios.
Knowledge Extraction with No Observable Data
Knowledge distillation transfers the knowledge of a large neural network into a smaller one and has been shown to be effective, especially when the amount of training data is limited or the student model is very small.
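For readers unfamiliar with the basic objective, below is a minimal sketch of the standard data-dependent distillation loss of Hinton et al., shown only to make the term concrete; it is not the data-free extraction method proposed in this paper, and the temperature T, mixing weight alpha, and function name are illustrative choices.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Weighted sum of soft-target KL divergence and hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale by T^2 so gradients keep a comparable magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```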
MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
The effectiveness of such attacks relies heavily on the availability of data necessary to query the target model.
Robustness and Diversity Seeking Data-Free Knowledge Distillation
Knowledge distillation (KD) has enabled remarkable progress in model compression and knowledge transfer.
Training Generative Adversarial Networks in One Stage
Based on the adversarial losses of the generator and discriminator, we categorize GANs into two classes, Symmetric GANs and Asymmetric GANs, and introduce a novel gradient decomposition method to unify the two, allowing us to train both classes in one stage and hence alleviate the training effort.
Towards Data-Free Domain Generalization
In particular, we address the question: How can knowledge contained in models trained on different source domains be merged into a single model that generalizes well to unseen target domains, in the absence of source and target domain data?