Text-based Image Editing

21 papers with code • 1 benchmark • 2 datasets

Text-based image editing modifies an input image according to a natural-language prompt or instruction, changing what the text specifies while preserving the rest of the image.

Libraries

Use these libraries to find Text-based Image Editing models and implementations

Most implemented papers

Prompt-to-Prompt Image Editing with Cross Attention Control

google/prompt-to-prompt 2 Aug 2022

Editing is challenging for these generative models: an editing technique must preserve most of the original image, yet in text-based models even a small modification of the text prompt often leads to a completely different outcome.
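The paper's answer is to record the cross-attention maps produced while generating with the source prompt and inject them when generating with the edited prompt, so the spatial layout survives the edit. A minimal sketch of that injection point, assuming a simplified single-head attention function (not the official google/prompt-to-prompt code):

```python
# Minimal sketch of Prompt-to-Prompt-style attention injection (simplified).
import torch

def cross_attention(query, key, value, injected_probs=None):
    """Cross-attention that can optionally reuse stored attention maps.

    Prompt-to-Prompt first generates with the source prompt and records the
    attention probabilities, then re-generates with the edited prompt while
    injecting the recorded probabilities so the image layout is preserved.
    """
    scale = query.shape[-1] ** -0.5
    probs = torch.softmax((query @ key.transpose(-1, -2)) * scale, dim=-1)
    if injected_probs is not None:
        probs = injected_probs  # reuse the source prompt's attention map
    return probs @ value, probs
```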

InstructPix2Pix: Learning to Follow Image Editing Instructions

timothybrooks/instruct-pix2pix CVPR 2023

We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image.
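A typical way to try the released checkpoint is through its Hugging Face diffusers port; the pipeline and argument names below follow diffusers and may differ across versions:

```python
# Usage sketch with the diffusers port of InstructPix2Pix.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input.png").convert("RGB")
edited = pipe(
    "make it look like a watercolor painting",  # the written instruction
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,  # fidelity to the input image
    guidance_scale=7.5,        # fidelity to the instruction
).images[0]
edited.save("edited.png")
```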

Null-text Inversion for Editing Real Images using Guided Diffusion Models

google/prompt-to-prompt CVPR 2023

Our Null-text inversion, based on the publicly available Stable Diffusion model, is extensively evaluated on a variety of images and prompt edits, showing high-fidelity editing of real images.
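The method first runs DDIM inversion on the real image, then optimizes the unconditional ("null-text") embedding at each timestep so that classifier-free-guided sampling retraces the inversion trajectory. A minimal sketch of that optimization loop, where `unet`, `ddim_step`, and the latent list `z_star` are hypothetical stand-ins for the actual components:

```python
# Sketch of null-text optimization (simplified). `unet` is a diffusers-style
# noise predictor, `ddim_step(eps, t, z)` a hypothetical single DDIM update,
# `z_star` the DDIM-inversion latents ordered [z*_T, ..., z*_0] (one more
# entry than `timesteps`, which is ordered [T, ..., 1]).
import torch
import torch.nn.functional as F

def optimize_null_embeddings(unet, ddim_step, z_star, cond_emb, null_emb,
                             timesteps, guidance_scale=7.5, iters=10, lr=1e-2):
    null_embs, z = [], z_star[0]
    for i, t in enumerate(timesteps):
        null = null_emb.detach().clone().requires_grad_(True)
        opt = torch.optim.Adam([null], lr=lr)
        for _ in range(iters):
            eps_u = unet(z, t, encoder_hidden_states=null).sample
            eps_c = unet(z, t, encoder_hidden_states=cond_emb).sample
            eps = eps_u + guidance_scale * (eps_c - eps_u)
            # One guided DDIM step should land on the inversion latent z*_{t-1}.
            loss = F.mse_loss(ddim_step(eps, t, z), z_star[i + 1])
            opt.zero_grad(); loss.backward(); opt.step()
        null_embs.append(null.detach())
        with torch.no_grad():
            eps_u = unet(z, t, encoder_hidden_states=null).sample
            eps_c = unet(z, t, encoder_hidden_states=cond_emb).sample
            z = ddim_step(eps_u + guidance_scale * (eps_c - eps_u), t, z)
    return null_embs  # per-timestep embeddings reused during editing
```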

Versatile Diffusion: Text, Images and Variations All in One Diffusion Model

shi-labs/versatile-diffusion ICCV 2023

In this work, we expand the existing single-flow diffusion pipeline into a multi-task multimodal network, dubbed Versatile Diffusion (VD), that handles multiple flows of text-to-image, image-to-text, and variations in one unified model.
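Because the flows share one set of weights, the same checkpoint can be loaded into different task pipelines. A sketch using the diffusers ports (pipeline names follow the diffusers release and may have changed since):

```python
# Sketch: two of Versatile Diffusion's flows driven from the same checkpoint.
import torch
from diffusers import (VersatileDiffusionTextToImagePipeline,
                       VersatileDiffusionImageVariationPipeline)
from PIL import Image

t2i = VersatileDiffusionTextToImagePipeline.from_pretrained(
    "shi-labs/versatile-diffusion", torch_dtype=torch.float16
).to("cuda")
generated = t2i("a watercolor painting of a lighthouse").images[0]

variation = VersatileDiffusionImageVariationPipeline.from_pretrained(
    "shi-labs/versatile-diffusion", torch_dtype=torch.float16
).to("cuda")
varied = variation(Image.open("photo.png").convert("RGB")).images[0]
```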

Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation

MichalGeyer/plug-and-play CVPR 2023

Large-scale text-to-image generative models have been a revolutionary breakthrough in the evolution of generative AI, allowing us to synthesize diverse images that convey highly complex visual concepts.

MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing

tencentarc/masactrl ICCV 2023

Despite the success in large-scale text-to-image generation and text-conditioned image editing, existing methods still struggle to produce consistent generation and editing results.

EDICT: Exact Diffusion Inversion via Coupled Transformations

salesforce/edict CVPR 2023

EDICT enables mathematically exact inversion of real and model-generated images by maintaining two coupled noise vectors which are used to invert each other in an alternating fashion.
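Each denoising step updates the two latents with affine, mutually dependent rules, so the step can be undone exactly by solving the same equations in reverse order. A minimal sketch of one coupled step and its exact inverse, with `eps` standing in for the (text-conditioned) noise prediction and `a`, `b` for the DDIM coefficients at that step:

```python
# Minimal sketch of EDICT's coupled, exactly invertible update (simplified;
# `p` is the mixing weight that keeps the two latents close together).
def edict_denoise_step(x, y, eps, a, b, p=0.93):
    x_inter = a * x + b * eps(y)               # update x from y's noise estimate
    y_inter = a * y + b * eps(x_inter)         # then y from the new x
    x_next = p * x_inter + (1 - p) * y_inter   # averaging keeps the pair close
    y_next = p * y_inter + (1 - p) * x_next
    return x_next, y_next

def edict_invert_step(x_next, y_next, eps, a, b, p=0.93):
    # Every line above is affine, so it can be undone exactly in reverse order.
    y_inter = (y_next - (1 - p) * x_next) / p
    x_inter = (x_next - (1 - p) * y_inter) / p
    y = (y_inter - b * eps(x_inter)) / a
    x = (x_inter - b * eps(y)) / a
    return x, y
```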

Zero-shot Image-to-Image Translation

pix2pixzero/pix2pix-zero 6 Feb 2023

However, it is still challenging to directly apply these models to editing real images, for two reasons.

Erasing Concepts from Diffusion Models

rohitgandikota/erasing ICCV 2023

We propose a fine-tuning method that can erase a visual concept from a pre-trained diffusion model, given only the name of the style and using negative guidance as a teacher.
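Concretely, the frozen original model provides a "negative guidance" teacher target that pushes the fine-tuned copy's prediction away from the concept. A sketch of that training objective, where `frozen` and `student` are hypothetical wrappers around the original and fine-tuned noise predictors:

```python
# Sketch of the negative-guidance erasure objective (simplified). `c` is the
# text embedding of the concept to erase, `eta` the guidance strength.
import torch
import torch.nn.functional as F

def erasure_loss(student, frozen, x_t, t, c, eta=1.0):
    with torch.no_grad():
        eps_uncond = frozen(x_t, t)              # unconditional prediction
        eps_concept = frozen(x_t, t, cond=c)     # prediction for the concept
        # Teacher target steers *away* from the concept (negative guidance).
        target = eps_uncond - eta * (eps_concept - eps_uncond)
    # Train the student to produce the steered prediction when shown the concept.
    return F.mse_loss(student(x_t, t, cond=c), target)
```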

LANCE: Stress-testing Visual Models by Generating Language-guided Counterfactual Images

virajprabhu/lance NeurIPS 2023

We propose an automated algorithm to stress-test a trained visual model by generating language-guided counterfactual test images (LANCE).