Multimodal Unsupervised Image-To-Image Translation
14 papers with code • 6 benchmarks • 4 datasets
Multimodal unsupervised image-to-image translation is the task of producing multiple translations to one domain from a single image in another domain.
(Image credit: MUNIT: Multimodal UNsupervised Image-to-image Translation)
Most implemented papers
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs.
Multimodal Unsupervised Image-to-Image Translation
To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain.
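The sentence above summarizes MUNIT's core mechanism: an image is decomposed into a domain-invariant content code and a domain-specific style code, and diverse translations come from pairing the same content with different sampled styles. Below is a minimal sketch of that recombination step, not MUNIT's reference implementation; the `TinyTranslator` module and its layer sizes are invented for illustration.

```python
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Minimal content/style recombination sketch (illustrative, not MUNIT's code)."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.style_dim = style_dim
        # Content encoder: keeps the spatial structure of the input image.
        self.content_enc = nn.Sequential(
            nn.Conv2d(3, 64, 7, padding=3), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Target-domain decoder: conditioned on a style vector broadcast
        # over the spatial dimensions of the content code.
        self.decoder = nn.Sequential(
            nn.Conv2d(64 + style_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh(),
        )

    def forward(self, x, style):
        content = self.content_enc(x)                       # (B, 64, H, W)
        b, _, h, w = content.shape
        style_map = style.view(b, self.style_dim, 1, 1).expand(b, self.style_dim, h, w)
        return self.decoder(torch.cat([content, style_map], dim=1))

model = TinyTranslator()
x = torch.randn(1, 3, 128, 128)  # input image from the source domain
# Sampling several random style codes from the target domain's prior
# gives several distinct translations of the same content.
outputs = [model(x, torch.randn(1, 8)) for _ in range(3)]
```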
StarGAN v2: Diverse Image Synthesis for Multiple Domains
A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains.
Unsupervised Image-to-Image Translation Networks
Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains.
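The shared-latent-space assumption behind UNIT implies that corresponding images in the two domains encode to a common latent code, so translation amounts to encoding with one domain's encoder and decoding with the other domain's decoder. A minimal conceptual sketch under that assumption, using hypothetical encoder/decoder modules rather than the paper's architecture:

```python
import torch
import torch.nn as nn

# Hypothetical per-domain encoders and decoders that share one latent space.
enc_a = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
enc_b = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
dec_a = nn.Sequential(nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
dec_b = nn.Sequential(nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

x_a = torch.randn(1, 3, 64, 64)   # image from domain A
z = enc_a(x_a)                    # shared latent code
x_ab = dec_b(z)                   # translation A -> B
x_aa = dec_a(z)                   # within-domain reconstruction
```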
Diverse Image-to-Image Translation via Disentangled Representations
Our model takes the encoded content features extracted from a given input and the attribute vectors sampled from the attribute space to produce diverse outputs at test time.
Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis
In this work, we propose a simple yet effective regularization term to address the mode collapse issue for cGANs.
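The proposed mode-seeking regularization pushes the generator to map distant latent codes to distant images: it maximizes the ratio of the image-space distance to the latent-space distance between two samples drawn for the same conditioning input. A hedged sketch of that term is shown below, assuming a generic conditional generator with signature `G(x, z)` (hypothetical names).

```python
import torch

def mode_seeking_loss(img1, img2, z1, z2, eps=1e-5):
    """Mode-seeking regularization (sketch): the generator minimizes the
    reciprocal of d_img / d_z, i.e. it is encouraged to maximize the ratio."""
    d_img = torch.mean(torch.abs(img1 - img2))   # distance between generated images
    d_z = torch.mean(torch.abs(z1 - z2))         # distance between latent codes
    ratio = d_img / (d_z + eps)
    return 1.0 / (ratio + eps)

# Usage sketch (names are illustrative): draw two latent codes for the same
# conditioning image x, generate twice, and add the term to the generator loss.
# z1, z2 = torch.randn(b, nz), torch.randn(b, nz)
# loss_G = adv_loss + lambda_ms * mode_seeking_loss(G(x, z1), G(x, z2), z1, z2)
```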
Lifespan Age Transformation Synthesis
Most existing aging methods are limited to changing the texture, overlooking transformations in head shape that occur during the human aging and growth process.
In2I : Unsupervised Multi-Image-to-Image Translation Using Generative Adversarial Networks
In unsupervised image-to-image translation, the goal is to learn the mapping between an input image and an output image using a set of unpaired training images.
Breaking the cycle -- Colleagues are all you need
Since the model does not need to support the cycle constraint, no irrelevant traces of the input are left on the generated image.
High-Resolution Daytime Translation Without Domain Labels
We present the high-resolution daytime translation (HiDT) model for this task.