Conditional Image Generation
133 papers with code • 10 benchmarks • 8 datasets
Conditional image generation is the task of generating new images from a dataset, conditioned on their class labels.
(Image credit: PixelCNN++)
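To make the task concrete, below is a minimal sketch of class-conditional generation in the cGAN style: the generator receives a noise vector together with an embedded class label, so sampling can be steered toward a chosen class. The layer sizes, module names, and the 32x32 output resolution are illustrative assumptions, not taken from any specific paper listed on this page.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, num_classes=10, z_dim=128, img_channels=3):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 256 * 4 * 4),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, img_channels, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, z, labels):
        # Condition on the class by concatenating the label embedding with the noise.
        cond = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(cond)

# Usage: sample 8 images of class 3.
G = ConditionalGenerator()
z = torch.randn(8, 128)
imgs = G(z, torch.full((8,), 3, dtype=torch.long))  # (8, 3, 32, 32)
```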
Libraries
Use these libraries to find Conditional Image Generation models and implementations.
Most implemented papers
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.
Analyzing and Improving the Image Quality of StyleGAN
Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.
Improved Training of Wasserstein GANs
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability.
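The instability mentioned above is what this paper's gradient penalty targets. A minimal sketch of that penalty, assuming a `critic` network and image-shaped tensors: the critic's gradient norm is pushed toward 1 on random interpolations between real and generated samples, with the penalty weight of 10 taken from the paper.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Random interpolation points between real and fake images.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # Gradient of the critic output with respect to the interpolated inputs.
    grads = torch.autograd.grad(
        outputs=scores.sum(), inputs=interp, create_graph=True
    )[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    # Penalize deviations of the gradient norm from 1.
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```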
Self-Attention Generative Adversarial Networks
In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks.
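A sketch of the self-attention block SAGAN describes: 1x1 convolutions produce query, key, and value maps, an attention map over all spatial positions models long-range dependencies, and a learned scale `gamma` blends the attended features back into the input. The channel reduction factor of 8 matches the paper; the module and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as an identity mapping

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c/8)
        k = self.key(x).flatten(2)                    # (b, c/8, hw)
        v = self.value(x).flatten(2)                  # (b, c, hw)
        attn = F.softmax(q @ k, dim=-1)               # attention over all positions
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x
```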
Improved Techniques for Training GANs
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.
Conditional Image Synthesis With Auxiliary Classifier GANs
We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models.
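The underlying AC-GAN objective can be sketched as follows: the discriminator has two heads, one predicting real vs. fake (the source) and one predicting the class label, and both losses are combined. The head names, shapes, and the assumption that `disc(x)` returns `(source_logit, class_logits)` are illustrative, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def acgan_discriminator_loss(disc, real_imgs, real_labels, fake_imgs, fake_labels):
    # disc(x) is assumed to return (source_logit, class_logits).
    src_real, cls_real = disc(real_imgs)
    src_fake, cls_fake = disc(fake_imgs.detach())
    # Source loss: distinguish real samples from generated ones.
    loss_src = (
        F.binary_cross_entropy_with_logits(src_real, torch.ones_like(src_real))
        + F.binary_cross_entropy_with_logits(src_fake, torch.zeros_like(src_fake))
    )
    # Auxiliary class loss: recover the class label of every sample.
    loss_cls = F.cross_entropy(cls_real, real_labels) + F.cross_entropy(cls_fake, fake_labels)
    return loss_src + loss_cls
```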
Large Scale GAN Training for High Fidelity Natural Image Synthesis
Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal.
Training Generative Adversarial Networks with Limited Data
We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs).
Diffusion Models Beat GANs on Image Synthesis
Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256×256 and 3.85 on ImageNet 512×512.
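A sketch of the classifier guidance idea from this paper: at each denoising step the mean predicted by the diffusion model is shifted along the gradient of a noisy-image classifier's log-probability for the target class, scaled by the (diagonal) variance and a guidance scale. The `classifier`, `mean`, and `variance` arguments are assumed to be produced elsewhere in the sampling loop; this is not the authors' exact code.

```python
import torch

def guided_mean(classifier, x_t, t, y, mean, variance, guidance_scale=1.0):
    # Gradient of log p(y | x_t) with respect to the noisy sample x_t.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_probs = torch.log_softmax(classifier(x_in, t), dim=-1)
        selected = log_probs[torch.arange(len(y)), y].sum()
        grad = torch.autograd.grad(selected, x_in)[0]
    # Shift the reverse-process mean toward higher classifier likelihood of class y.
    return mean + guidance_scale * variance * grad
```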