Face Reenactment
24 papers with code • 0 benchmarks • 1 dataset
Face Reenactment is an emerging conditional face synthesis task that aims at fulfilling two goals simultaneously: 1) transferring the source face shape to the target face, while 2) preserving the appearance and the identity of the target face.
Source: One-shot Face Reenactment
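The sketch below illustrates this conditional setting in PyTorch: a generator receives the target face image (supplying appearance and identity) and the source face's shape (here represented as 2D landmarks) and produces a frame that moves like the source but looks like the target. All module names, shapes, and layer choices are illustrative assumptions, not the implementation of any paper listed on this page.

```python
import torch
import torch.nn as nn

class ReenactmentGenerator(nn.Module):
    """Minimal sketch of a conditional face reenactment generator (hypothetical)."""
    def __init__(self, landmark_dim: int = 68 * 2, feat_dim: int = 256):
        super().__init__()
        # Encode the target face image into an appearance/identity code.
        self.appearance_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        # Encode the source face shape (2D landmarks) into a motion code.
        self.motion_encoder = nn.Sequential(
            nn.Linear(landmark_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Decode the fused code into an image; a real model would use a
        # convolutional / U-Net decoder, this is only a placeholder.
        self.decoder = nn.Sequential(nn.Linear(2 * feat_dim, 3 * 64 * 64), nn.Tanh())

    def forward(self, target_image: torch.Tensor, source_landmarks: torch.Tensor) -> torch.Tensor:
        app = self.appearance_encoder(target_image)   # who the output looks like
        mot = self.motion_encoder(source_landmarks)   # how the output moves
        out = self.decoder(torch.cat([app, mot], dim=1))
        return out.view(-1, 3, 64, 64)

# Usage: drive one 64x64 target image with a set of source landmarks.
g = ReenactmentGenerator()
frame = g(torch.randn(1, 3, 64, 64), torch.randn(1, 68 * 2))
print(frame.shape)  # torch.Size([1, 3, 64, 64])
```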
Most implemented papers
APB2Face: Audio-guided face reenactment with auxiliary pose and blink signals
Audio-guided face reenactment aims at generating photorealistic faces using audio information while maintaining the same facial movement as when speaking to a real person.
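As a rough illustration of the audio-guided setting with auxiliary signals, the sketch below fuses an audio feature vector with head-pose and blink values to predict facial landmarks that a downstream reenactment generator could consume. The feature dimensions, module names, and fusion scheme are assumptions for illustration, not the APB2Face architecture.

```python
import torch
import torch.nn as nn

class AudioPoseBlink2Landmarks(nn.Module):
    """Hypothetical fusion module: audio features + pose + blink -> 2D landmarks."""
    def __init__(self, audio_dim: int = 256, pose_dim: int = 3, n_landmarks: int = 68):
        super().__init__()
        self.n_landmarks = n_landmarks
        self.net = nn.Sequential(
            nn.Linear(audio_dim + pose_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, n_landmarks * 2),  # (x, y) per landmark
        )

    def forward(self, audio_feat: torch.Tensor, pose: torch.Tensor, blink: torch.Tensor) -> torch.Tensor:
        # audio_feat: (B, audio_dim), pose: (B, 3) yaw/pitch/roll, blink: (B, 1)
        fused = torch.cat([audio_feat, pose, blink], dim=1)
        return self.net(fused).view(-1, self.n_landmarks, 2)

model = AudioPoseBlink2Landmarks()
landmarks = model(torch.randn(2, 256), torch.randn(2, 3), torch.rand(2, 1))
print(landmarks.shape)  # torch.Size([2, 68, 2])
```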
One-shot Face Reenactment
However, in real-world scenarios, end-users often have only one target face at hand, rendering existing methods inapplicable.
AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation
In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image.
ReenactGAN: Learning to Reenact Faces via Boundary Transfer
A transformer is subsequently used to adapt the boundary of the source face to the boundary of the target face.
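The sketch below lays out the three-stage pipeline this description implies: an encoder maps a face image to a boundary heatmap, a transformer adapts the source boundary toward the target person's boundary space, and a target-specific decoder synthesizes the reenacted face from that boundary. The layer choices and tensor shapes here are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class BoundaryEncoder(nn.Module):
    """Maps a face image to a facial-boundary heatmap (placeholder layers)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, img): return self.net(img)

class BoundaryTransformer(nn.Module):
    """Adapts a source boundary heatmap into the target person's boundary space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, boundary): return self.net(boundary)

class TargetDecoder(nn.Module):
    """Synthesizes the target face from an adapted boundary heatmap."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, boundary): return self.net(boundary)

encoder, transformer, decoder = BoundaryEncoder(), BoundaryTransformer(), TargetDecoder()
source_frame = torch.randn(1, 3, 128, 128)
reenacted = decoder(transformer(encoder(source_frame)))
print(reenacted.shape)  # torch.Size([1, 3, 128, 128])
```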
ICface: Interpretable and Controllable Face Reenactment Using GANs
This paper presents a generic face animator that is able to control the pose and expressions of a given face image.
FReeNet: Multi-Identity Face Reenactment
This paper presents a novel multi-identity face reenactment framework, named FReeNet, to transfer facial expressions from an arbitrary source face to a target face with a shared model.
FSGAN: Subject Agnostic Face Swapping and Reenactment
We present Face Swapping GAN (FSGAN) for face swapping and reenactment.
SMILE: Semantically-guided Multi-attribute Image and Layout Editing
Additionally, our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space, and it can be easily extended to head-swapping and face-reenactment applications without being trained on videos.
APB2FaceV2: Real-Time Audio-Guided Multi-Face Reenactment
Audio-guided face reenactment aims to generate a photorealistic face whose facial expression matches the input audio.
Everything's Talkin': Pareidolia Face Reenactment
We present a new application direction named Pareidolia Face Reenactment, which is defined as animating a static illusory face to move in tandem with a human face in a video.