Blind Face Restoration
24 papers with code • 4 benchmarks • 4 datasets
Blind face restoration aims to recover high-quality faces from low-quality counterparts suffering from unknown degradations such as low resolution, noise, blur, and compression artifacts. The task becomes even more challenging in real-world scenarios due to more complicated degradations and diverse poses and expressions.
Description source: Towards Real-World Blind Face Restoration with Generative Facial Prior
Image source: Towards Real-World Blind Face Restoration with Generative Facial Prior
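In practice, the unknown degradations described above are usually simulated during training with a synthetic pipeline (blur, downsampling, noise, JPEG compression). Below is a minimal sketch of such a pipeline; the kernel sigma, scale factor, noise level, and JPEG quality are illustrative assumptions, not the exact settings of any particular paper.

```python
import cv2
import numpy as np

def degrade_face(hq, scale=4, blur_sigma=2.0, noise_sigma=10, jpeg_quality=60):
    """Apply blur -> downsample -> Gaussian noise -> JPEG compression to an HQ face.

    hq: uint8 BGR image of shape (H, W, 3). All parameter values are illustrative.
    """
    h, w = hq.shape[:2]
    # 1. Gaussian blur (the kernel is unknown in the blind setting).
    lq = cv2.GaussianBlur(hq, ksize=(0, 0), sigmaX=blur_sigma)
    # 2. Downsample by the scale factor.
    lq = cv2.resize(lq, (w // scale, h // scale), interpolation=cv2.INTER_LINEAR)
    # 3. Additive Gaussian noise.
    noise = np.random.randn(*lq.shape) * noise_sigma
    lq = np.clip(lq.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    # 4. JPEG compression artifacts.
    ok, buf = cv2.imencode(".jpg", lq, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    lq = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    # 5. Upsample back so the LQ input matches the HQ target size.
    lq = cv2.resize(lq, (w, h), interpolation=cv2.INTER_LINEAR)
    return lq
```

Randomizing the blur sigma, noise level, and JPEG quality per sample is the usual way to cover the "unknown degradation" assumption during training.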
Most implemented papers
DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better
We present a new end-to-end generative adversarial network (GAN) for single image motion deblurring, named DeblurGAN-v2, which considerably boosts state-of-the-art deblurring efficiency, quality, and flexibility.
HiFaceGAN: Face Renovation via Collaborative Suppression and Replenishment
Existing face restoration research typically relies on either a degradation prior or explicit guidance labels for training, which often results in limited generalization over real-world images with heterogeneous degradations and rich background content.
GAN Prior Embedded Network for Blind Face Restoration in the Wild
The proposed GAN prior embedded network (GPEN) is easy to implement, and it can generate visually photo-realistic results.
Blind Face Restoration: Benchmark Datasets and a Baseline Model
To address this problem, we first synthesize two blind face restoration benchmark datasets called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512).
DifFace: Blind Face Restoration with Diffused Error Contraction
Moreover, the transition distribution can contract the error of the restoration backbone and thus makes our method more robust to unknown degradations.
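A minimal sketch of how a diffused-error-contraction scheme can be organized: a restoration backbone produces a rough HQ estimate, that estimate is forward-diffused to an intermediate timestep, and a reverse diffusion chain finishes the restoration. The `backbone`, `denoiser`, `betas`, and `start_t` names are hypothetical placeholders, and the DDPM-style update below is a generic choice rather than the paper's exact sampler.

```python
import torch

def diffuse_then_restore(lq, backbone, denoiser, betas, start_t=400):
    """Restore an LQ face by diffusing a rough estimate and denoising it back.

    backbone(lq) -> rough HQ estimate x0_hat   (hypothetical restoration network)
    denoiser(x_t, t) -> predicted noise        (hypothetical diffusion model)
    betas: 1-D tensor holding the diffusion noise schedule.
    """
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    # 1. Rough estimate from the restoration backbone.
    x0_hat = backbone(lq)

    # 2. Forward-diffuse the estimate to timestep start_t; the injected noise
    #    shrinks ("contracts") the backbone's restoration error.
    x_t = (alpha_bar[start_t].sqrt() * x0_hat
           + (1 - alpha_bar[start_t]).sqrt() * torch.randn_like(x0_hat))

    # 3. Run a DDPM-style reverse chain from start_t back to a clean image.
    for t in range(start_t, 0, -1):
        eps = denoiser(x_t, torch.tensor([t]))
        mean = (x_t - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x_t = mean
        if t > 1:
            x_t = x_t + betas[t].sqrt() * torch.randn_like(x_t)
    return x_t
```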
Learning Warped Guidance for Blind Face Restoration
For better recovery of fine facial details, we modify the problem setting by taking both the degraded observation and a high-quality guided image of the same identity as input to our guided face restoration network (GFRNet).
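A minimal sketch of this guided setting, assuming a hypothetical two-branch layout: a warping subnetwork predicts a flow field to align the guidance image to the degraded face, and a reconstruction subnetwork restores from the concatenated pair. The layer sizes and the flow-plus-`grid_sample` warping are illustrative, not GFRNet's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedRestorationSketch(nn.Module):
    """Toy guided face restoration: align the HQ guidance, then restore."""

    def __init__(self):
        super().__init__()
        # Warping subnet: predicts dense flow offsets from (degraded, guidance).
        self.warp_net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),   # 2-channel flow field
        )
        # Reconstruction subnet: restores from (degraded, warped guidance).
        self.rec_net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, degraded, guidance):
        b, _, h, w = degraded.shape
        flow = self.warp_net(torch.cat([degraded, guidance], dim=1))
        # Sampling grid: identity grid plus the predicted offsets.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).to(degraded).expand(b, h, w, 2)
        grid = base + flow.permute(0, 2, 3, 1)
        warped = F.grid_sample(guidance, grid, align_corners=True)
        return self.rec_net(torch.cat([degraded, warped], dim=1))
```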
Image Processing Using Multi-Code GAN Prior
Such an over-parameterization of the latent space significantly improves the image reconstruction quality, outperforming existing competitors.
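A minimal sketch of multi-code GAN inversion under stated assumptions: several latent codes are optimized jointly with per-code channel weights, their intermediate feature maps are blended, and the upper layers of a pretrained generator decode the blend. `g_lower`/`g_upper` are a hypothetical split of the generator, and the plain MSE loss and Adam settings are illustrative choices.

```python
import torch
import torch.nn.functional as F

def multi_code_inversion(target, g_lower, g_upper, feat_channels,
                         num_codes=10, steps=500, lr=0.01):
    """Reconstruct `target` with several latent codes instead of a single one.

    g_lower(z) -> intermediate feature maps (N, C, H, W)  (hypothetical generator split)
    g_upper(feat) -> image
    feat_channels: C, the channel count of the intermediate layer.
    """
    codes = torch.randn(num_codes, 512, requires_grad=True)             # z_1 .. z_N
    weights = torch.ones(num_codes, feat_channels, requires_grad=True)  # alpha_{n,c}
    opt = torch.optim.Adam([codes, weights], lr=lr)
    for _ in range(steps):
        feats = g_lower(codes)                        # one feature map per code
        w = torch.softmax(weights, dim=0)             # adaptive channel importance
        blended = (feats * w[:, :, None, None]).sum(dim=0, keepdim=True)
        recon = g_upper(blended)                      # decode the blended features
        loss = F.mse_loss(recon, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return recon.detach()
```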
Enhanced Blind Face Restoration With Multi-Exemplar Images and Adaptive Spatial Feature Fusion
First, given a degraded observation, we select the optimal guidance based on the weighted affine distance between landmark sets, where the landmark weights are learned so that the selected guidance is best suited to HQ image reconstruction.
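A minimal sketch of guidance selection by weighted affine distance on landmarks: fit a least-squares affine transform from each exemplar's landmarks onto the degraded face's landmarks, score the weighted alignment residual, and pick the exemplar with the smallest distance. The function names are hypothetical, and the per-landmark weights (learned in the paper) are passed in as a fixed stand-in here.

```python
import numpy as np

def weighted_affine_distance(lm_deg, lm_ref, weights):
    """Distance between landmark sets after the best-fit affine alignment.

    lm_deg, lm_ref: (K, 2) landmark arrays; weights: (K,) per-landmark weights.
    """
    ones = np.ones((lm_ref.shape[0], 1))
    X = np.hstack([lm_ref, ones])                   # (K, 3) homogeneous landmarks
    A, *_ = np.linalg.lstsq(X, lm_deg, rcond=None)  # (3, 2) affine transform
    residual = X @ A - lm_deg                       # (K, 2) alignment error
    return float(np.sum(weights * np.linalg.norm(residual, axis=1)))

def select_guidance(lm_deg, exemplar_landmarks, weights):
    """Pick the exemplar whose landmarks align best with the degraded face."""
    dists = [weighted_affine_distance(lm_deg, lm, weights) for lm in exemplar_landmarks]
    return int(np.argmin(dists))
```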
Blind Face Restoration via Deep Multi-scale Component Dictionaries
Next, with the degraded input, we match and select the most similar component features from their corresponding dictionaries and transfer the high-quality details to the input via the proposed dictionary feature transfer (DFT) block.
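A minimal sketch of the matching step for one facial component, assuming the dictionary is a stack of precomputed HQ component features: the degraded component's feature is compared against every dictionary atom by cosine similarity and the best match is handed to the decoder. The simple "return the best atom" transfer omits the confidence-map blending of the actual DFT block.

```python
import torch
import torch.nn.functional as F

def dictionary_feature_transfer(comp_feat, dictionary):
    """Match one degraded component feature against an HQ component dictionary.

    comp_feat:  (C, h, w) feature of the degraded component (e.g. an eye crop).
    dictionary: (K, C, h, w) high-quality component features.
    """
    # Cosine similarity between the input feature and every dictionary atom.
    q = F.normalize(comp_feat.flatten(), dim=0)                # (C*h*w,)
    d = F.normalize(dictionary.flatten(start_dim=1), dim=1)    # (K, C*h*w)
    scores = d @ q                                             # (K,)
    best = scores.argmax()
    # Transfer: pass the most similar HQ component feature on to the decoder.
    return dictionary[best], int(best)
```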
Progressive Semantic-Aware Style Transformation for Blind Face Restoration
Compared with previous networks, the proposed PSFR-GAN makes full use of both semantic-space (parsing maps) and pixel-space (LQ images) information from multi-scale input pairs.
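A minimal sketch of how such multi-scale (LQ image, parsing map) input pairs can be assembled, coarsest first; the number of scales and the plain bilinear/nearest resizing are illustrative assumptions, not the paper's exact pipeline.

```python
import torch
import torch.nn.functional as F

def build_multiscale_pairs(lq, parsing, num_scales=4):
    """Build coarse-to-fine (LQ image, parsing map) input pairs.

    lq:      (N, 3, H, W) low-quality images.
    parsing: (N, P, H, W) parsing maps (one channel per facial region).
    """
    pairs = []
    for s in range(num_scales):
        factor = 2 ** (num_scales - 1 - s)                 # coarse-to-fine
        img = F.interpolate(lq, scale_factor=1 / factor,
                            mode="bilinear", align_corners=False)
        seg = F.interpolate(parsing, scale_factor=1 / factor, mode="nearest")
        pairs.append(torch.cat([img, seg], dim=1))         # channel-wise pair
    return pairs
```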