Unsupervised Image-To-Image Translation
69 papers with code • 2 benchmarks • 2 datasets
Unsupervised image-to-image translation is the task of performing image-to-image translation without ground-truth image-to-image pairings.
(Image credit: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks)
Most implemented papers
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs.
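CycleGAN removes the need for aligned pairs by training two generators, one per direction, and requiring that a round trip through both reconstructs the input. The snippet below is a minimal sketch of that cycle-consistency term only, with toy single-layer stand-ins for the generators; it is not the authors' reference implementation, and the adversarial losses are omitted.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two generators: G maps X -> Y, F maps Y -> X.
G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
F = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
l1 = nn.L1Loss()

def cycle_consistency_loss(real_x, real_y, lam=10.0):
    """L1 reconstruction error after a round trip through both generators."""
    fake_y = G(real_x)   # X -> Y
    rec_x = F(fake_y)    # Y -> X, should match real_x
    fake_x = F(real_y)   # Y -> X
    rec_y = G(fake_x)    # X -> Y, should match real_y
    return lam * (l1(rec_x, real_x) + l1(rec_y, real_y))

x = torch.randn(1, 3, 64, 64)
y = torch.randn(1, 3, 64, 64)
cycle_consistency_loss(x, y).backward()
```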
U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner.
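The learnable normalization in U-GAT-IT is AdaLIN, which mixes instance-norm and layer-norm statistics with a learnable ratio and modulates the result with externally supplied scale and shift. A minimal sketch of that idea follows; the class name and the way gamma/beta are passed in are illustrative assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    """Sketch of Adaptive Layer-Instance Normalization (AdaLIN).

    rho learns how much to mix instance-norm vs. layer-norm statistics;
    gamma and beta are supplied externally (e.g. from style features).
    """
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.rho = nn.Parameter(torch.full((1, num_features, 1, 1), 0.9))

    def forward(self, x, gamma, beta):
        # Instance-norm statistics: per sample, per channel.
        in_mean = x.mean(dim=(2, 3), keepdim=True)
        in_var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        # Layer-norm statistics: per sample, across channels and space.
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)
        ln_var = x.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        rho = self.rho.clamp(0, 1)
        out = rho * x_in + (1 - rho) * x_ln
        return out * gamma.view(x.size(0), -1, 1, 1) + beta.view(x.size(0), -1, 1, 1)

x = torch.randn(2, 64, 32, 32)
print(AdaLIN(64)(x, torch.ones(2, 64), torch.zeros(2, 64)).shape)
```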
Unsupervised Domain Adaptation by Backpropagation
Here, we propose a new approach to domain adaptation in deep architectures that can be trained on a large amount of labeled data from the source domain and a large amount of unlabeled data from the target domain (no labeled target-domain data is necessary).
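The mechanism behind this approach is a gradient reversal layer: the forward pass is the identity, but the gradient flowing back from a domain classifier is negated, so the shared features are pushed toward being domain-invariant. A minimal sketch, assuming PyTorch and an illustrative lambda value:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated (scaled) gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

features = torch.randn(4, 16, requires_grad=True)
reversed_features = grad_reverse(features, lam=0.5)
reversed_features.sum().backward()
print(features.grad[0, :4])  # each gradient entry is -0.5 instead of +1.0
```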
Adversarial Discriminative Domain Adaptation
Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains.
Multimodal Unsupervised Image-to-Image Translation
To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain.
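The sketch below illustrates that content/style recombination under simplifying assumptions: the encoder and decoder are toy placeholder layers, and the style code is injected by simple concatenation for brevity, whereas MUNIT itself injects style through AdaIN parameters in the decoder.

```python
import torch
import torch.nn as nn

# Placeholder content encoder and decoder (toy stand-ins, not MUNIT's architecture).
content_encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1))
decoder = nn.Sequential(nn.Conv2d(8 + 4, 3, 3, padding=1))

def translate(x_a, style_dim=4):
    """Translate a domain-A image using a random style code from the target domain."""
    content = content_encoder(x_a)                  # domain-invariant structure
    style_b = torch.randn(x_a.size(0), style_dim)   # sample from the target style prior
    style_map = style_b[:, :, None, None].expand(-1, -1, *content.shape[2:])
    return decoder(torch.cat([content, style_map], dim=1))

x_a = torch.randn(1, 3, 64, 64)
print(translate(x_a).shape)  # torch.Size([1, 3, 64, 64])
```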
Few-Shot Unsupervised Image-to-Image Translation
Unsupervised image-to-image translation methods learn to map images in a given class to an analogous image in a different class, drawing on unstructured (non-registered) datasets of images.
Unsupervised Image-to-Image Translation Networks
Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains.
Attention-Guided Generative Adversarial Networks for Unsupervised Image-to-Image Translation
To handle the limitation, in this paper we propose a novel Attention-Guided Generative Adversarial Network (AGGAN), which can detect the most discriminative semantic object and minimize changes to unwanted parts for semantic manipulation problems without using extra data or models.
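The core idea is that the generator also predicts a per-pixel attention mask, and the final output only changes where the mask is active while the background is copied from the input. A minimal sketch of that composition step, with a single conv layer standing in for the real generator backbone:

```python
import torch
import torch.nn as nn

# Toy backbone producing 3 content channels plus 1 attention channel.
backbone = nn.Conv2d(3, 4, 3, padding=1)

def attention_guided_translate(x):
    out = backbone(x)
    content = torch.tanh(out[:, :3])                  # translated image proposal
    attention = torch.sigmoid(out[:, 3:4])            # per-pixel foreground mask in [0, 1]
    return attention * content + (1 - attention) * x  # background copied from the input

x = torch.randn(1, 3, 64, 64)
print(attention_guided_translate(x).shape)
```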
Unsupervised Cross-Domain Image Generation
We study the problem of transferring a sample in one domain to an analog sample in another domain.
XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings
Style transfer usually refers to the task of applying color and texture information from a specific style image to a given content image while preserving the structure of the latter.