No-Reference Image Quality Assessment
53 papers with code • 5 benchmarks • 5 datasets
An Image Quality Assessment approach where no reference image information is available to the model. Sometimes referred to as Blind Image Quality Assessment (BIQA).
Most implemented papers
SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness
Face image quality is an important factor in enabling high-performance face recognition systems.
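The title summarizes the core idea: quality is estimated from how robust a face embedding is under random dropout. The sketch below is a minimal, hedged illustration of that variability-to-quality mapping; the `embed`/model interface is a placeholder, and the exact scaling used in the paper may differ.

```python
import torch

def ser_fiq_quality(stochastic_embeddings: torch.Tensor) -> float:
    """Estimate face image quality from m stochastic embeddings (m x d).

    Each row is the embedding of the SAME face image under a different
    dropout mask; low pairwise distance (high robustness) maps to high quality.
    """
    x = torch.nn.functional.normalize(stochastic_embeddings, dim=1)
    dists = torch.cdist(x, x)                    # m x m pairwise Euclidean distances
    m = x.shape[0]
    mean_dist = dists.sum() / (m * (m - 1))      # average over off-diagonal pairs
    # Map distance to (0, 1): identical embeddings -> quality close to 1.
    return float(2.0 * torch.sigmoid(-mean_dist))

# Usage (hypothetical model with a dropout-bearing embedding head):
# model.train()  # keep dropout active at inference time
# embs = torch.stack([model.embed(face_tensor) for _ in range(100)])
# q = ser_fiq_quality(embs)
```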
RankIQA: Learning from Rankings for No-reference Image Quality Assessment
Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and even outperforms state-of-the-art full-reference IQA (FR-IQA) methods, without resorting to high-quality reference images to infer quality.
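RankIQA learns from image pairs whose relative quality is known for free (e.g. the same image at two synthetic distortion levels) before any human scores are used. A minimal pairwise ranking step might look like the following PyTorch sketch; the backbone, margin, and distortion pipeline are placeholders, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class QualityScorer(nn.Module):
    """Tiny stand-in backbone that maps an image to a single quality score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

scorer = QualityScorer()
rank_loss = nn.MarginRankingLoss(margin=1.0)   # encourages score(better) > score(worse) + margin
opt = torch.optim.Adam(scorer.parameters(), lr=1e-4)

# img_better is a less-distorted version of img_worse, so the ranking target is +1:
img_better = torch.rand(8, 3, 64, 64)
img_worse = torch.rand(8, 3, 64, 64)
target = torch.ones(8)

loss = rank_loss(scorer(img_better), scorer(img_worse), target)
loss.backward()
opt.step()
```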
No-Reference Quality Assessment of Contrast-Distorted Images using Contrast Enhancement
No-reference image quality assessment (NR-IQA) aims to measure image quality without a reference image.
Image Quality Assessment using Contrastive Learning
We consider the problem of obtaining image quality representations in a self-supervised manner.
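The self-supervised objective referred to here is contrastive: two views of the same image should map to nearby embeddings, other images to distant ones. Below is a generic NT-Xent (normalized temperature-scaled cross-entropy) loss sketch; the quality-aware view generation and class definitions used in the actual paper are not reproduced.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """Contrastive loss over a batch of paired embeddings (N x d each).

    z1[i] and z2[i] are two views of the same image (positives); every other
    embedding in the combined batch acts as a negative.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # 2N x d, unit norm
    sim = z @ z.t() / temperature                           # cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))              # drop self-similarity
    # The positive for index i is i+n (and vice versa).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Usage: z1, z2 = encoder(view1), encoder(view2); loss = nt_xent_loss(z1, z2)
```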
MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment
No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual quality of images in accordance with human subjective perception.
Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild
To advance research in this field, we propose a Mixture of Experts approach to train two separate encoders to learn high-level content and low-level image quality features in an unsupervised setting.
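Conceptually, the two pre-trained encoders are frozen and their features are concatenated before a lightweight regressor maps them to a quality score. The sketch below assumes two generic content/quality encoder modules that return flat feature vectors; the names, dimensions, and linear head are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ReIQAStyleHead(nn.Module):
    """Combine frozen content and quality features and regress a quality score."""
    def __init__(self, content_encoder: nn.Module, quality_encoder: nn.Module,
                 content_dim: int, quality_dim: int):
        super().__init__()
        self.content_encoder = content_encoder.eval()
        self.quality_encoder = quality_encoder.eval()
        for p in list(self.content_encoder.parameters()) + list(self.quality_encoder.parameters()):
            p.requires_grad_(False)                     # encoders stay frozen
        self.regressor = nn.Linear(content_dim + quality_dim, 1)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            f_content = self.content_encoder(img)       # high-level semantics (B x content_dim)
            f_quality = self.quality_encoder(img)       # low-level degradations (B x quality_dim)
        return self.regressor(torch.cat([f_content, f_quality], dim=1)).squeeze(1)
```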
Deep learning techniques for blind image super-resolution: A high-scale multi-domain perspective evaluation
Two no-reference metrics were selected to assess the techniques: the classical natural image quality evaluator (NIQE) and the recent transformer-based multi-dimension attention network for no-reference image quality assessment (MANIQA) score.
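For reproducing this kind of evaluation, both metrics are available in the IQA-PyTorch (pyiqa) package; the snippet below assumes its create_metric interface and the registered metric names 'niqe' and 'maniqa', which should be checked against the installed version.

```python
import torch
import pyiqa  # IQA-PyTorch; interface assumed here, verify against the installed version

device = 'cuda' if torch.cuda.is_available() else 'cpu'
niqe = pyiqa.create_metric('niqe', device=device)      # lower is better
maniqa = pyiqa.create_metric('maniqa', device=device)  # higher is better

sr_output = torch.rand(1, 3, 256, 256, device=device)  # stand-in for a super-resolved image
print('NIQE:', niqe(sr_output).item())
print('MANIQA:', maniqa(sr_output).item())
```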
No-Reference Image Quality Assessment in the Spatial Domain
We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain.
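The spatial-domain statistics in question are mean-subtracted contrast-normalized (MSCN) coefficients, whose empirical distribution shifts predictably under distortion. A minimal NumPy/SciPy computation of these coefficients (the front end of BRISQUE-style models) is sketched below; the subsequent GGD/AGGD feature fitting and regression are omitted, and the kernel width is only a common default.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(gray: np.ndarray, sigma: float = 7 / 6, c: float = 1.0) -> np.ndarray:
    """Mean-subtracted contrast-normalized coefficients of a grayscale image.

    MSCN(i, j) = (I(i, j) - mu(i, j)) / (sigma(i, j) + C), where mu and sigma
    are the local Gaussian-weighted mean and standard deviation.
    """
    gray = gray.astype(np.float64)
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray * gray, sigma) - mu * mu
    std = np.sqrt(np.maximum(var, 0.0))
    return (gray - mu) / (std + c)

# For pristine natural images the MSCN histogram is close to a unit Gaussian;
# distortions change its shape, which is what spatial NSS features capture.
```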
Which Has Better Visual Quality: The Clear Blue Sky or a Blurry Animal?
The proposed method, SFA, is compared with nine representative blur-specific NR-IQA methods, two general-purpose NR-IQA methods, and two extra full-reference IQA methods on Gaussian blur images (with and without Gaussian noise/JPEG compression) and realistic blur images from multiple databases, including LIVE, TID2008, TID2013, MLIVE1, MLIVE2, BID, and CLIVE.
Exploiting High-Level Semantics for No-Reference Image Quality Assessment of Realistic Blur Images
To guarantee a satisfying Quality of Experience (QoE) for consumers, image quality must be measured efficiently and reliably.