The CSIQ database consists of 30 original images, each distorted using six different types of distortion at four to five levels of severity. CSIQ images were subjectively rated based on a linear displacement of the images across four calibrated LCD monitors placed side by side at equal viewing distance from the observer. The database contains 5,000 subjective ratings from 35 different observers, reported in the form of DMOS (difference mean opinion scores); a rough sketch of how a DMOS-style score is formed follows this entry.
102 PAPERS • 1 BENCHMARK
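As context for the DMOS values mentioned above, here is a minimal sketch of one common way a DMOS-style score can be derived from raw opinion scores: the mean rating of the distorted image is subtracted from the mean rating of its reference and normalised by an assumed rating scale. The rating scale, normalisation, and function name are assumptions for illustration only, not CSIQ's exact protocol.

```python
import numpy as np

def dmos_score(ref_ratings, dist_ratings, scale_max=100.0):
    """Illustrative DMOS-style value: mean opinion score of the reference
    minus mean opinion score of the distorted image, normalised by an
    assumed rating-scale maximum. Higher values indicate stronger perceived
    degradation. Generic sketch, not CSIQ's exact procedure."""
    ref_mos = np.mean(ref_ratings)
    dist_mos = np.mean(dist_ratings)
    return (ref_mos - dist_mos) / scale_max

# Hypothetical ratings on an assumed 0-100 scale
print(dmos_score([92, 88, 95, 90], [55, 60, 48, 52]))  # 0.375
```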
KonIQ-10k is a large-scale IQA dataset consisting of 10,073 quality-scored images. It is the first in-the-wild database aiming for ecological validity with regard to the authenticity of distortions, the diversity of content, and quality-related indicators. Through crowdsourcing, the authors obtained 1.2 million reliable quality ratings from 1,459 crowd workers, paving the way for more general IQA models.
87 PAPERS • 1 BENCHMARK
The dataset is composed of videos from the MSU Video Upscalers Benchmark Dataset, the MSU Video Super-Resolution Benchmark Dataset, and the MSU Super-Resolution for Video Compression Benchmark Dataset. It consists of real videos (filmed with two cameras), video game footage, movies, cartoons, and dynamic ads.
26 PAPERS • 1 BENCHMARK
A dataset of 28,792 retinal images from the EyePACS dataset, graded with a three-level quality system (i.e., 'Good', 'Usable', and 'Reject') for evaluating retinal image quality assessment (RIQA) methods.
22 PAPERS • NO BENCHMARKS YET
The dataset was created for the video quality assessment problem. It comprises 36 clips from Vimeo, selected from more than 18,000 open-source clips with high bitrate (licensed CC BY or CC0).
20 PAPERS • 2 BENCHMARKS
TID2013 is a dataset for image quality assessment that contains 25 reference images and 3,000 distorted images (25 reference images × 24 distortion types × 5 distortion levels); see the enumeration sketch after this entry.
17 PAPERS • 1 BENCHMARK
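The 3,000 distorted images follow directly from the full factorial design above (25 × 24 × 5). The short sketch below enumerates one identifier per distorted image; the iRR_DD_L.bmp naming pattern is an assumption based on commonly circulated copies of TID2013 and may not match every release.

```python
from itertools import product

N_REFERENCES, N_DISTORTIONS, N_LEVELS = 25, 24, 5

# One identifier per distorted image: reference index, distortion type, level.
# The "iRR_DD_L.bmp" pattern is assumed, not guaranteed for every release.
names = [
    f"i{ref:02d}_{dist:02d}_{level}.bmp"
    for ref, dist, level in product(
        range(1, N_REFERENCES + 1),
        range(1, N_DISTORTIONS + 1),
        range(1, N_LEVELS + 1),
    )
]

assert len(names) == 3000  # 25 * 24 * 5
print(names[0], names[-1])  # i01_01_1.bmp i25_24_5.bmp
```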
Includes more than two million traffic sign images that are based on real-world and simulator data.
14 PAPERS • NO BENCHMARKS YET
Year after year, the demand for ever-better smartphone photos continues to grow, particularly in the domain of portrait photography. Manufacturers thus use perceptual quality criteria throughout the development of smartphone cameras. This costly procedure can be partially replaced by automated learning-based methods for image quality assessment (IQA). Due to its subjective nature, it is necessary to estimate and guarantee the consistency of the IQA process, a characteristic lacking in the mean opinion scores (MOS) widely used for crowdsourcing IQA. In addition, existing blind IQA (BIQA) datasets pay little attention to the difficulty of cross-content assessment, which may degrade the quality of annotations. PIQ23 is a portrait-specific IQA dataset of 5,116 images of 50 predefined scenarios acquired by 100 smartphones, covering a wide variety of brands, models, and use cases. The dataset includes individuals of various genders and ethnicities who have given explicit consent for their images to be used in research.
4 PAPERS • NO BENCHMARKS YET
Hephaestus is the first large-scale InSAR dataset. Driven by volcanic unrest detection, it provides 19,919 unique satellite frames annotated with a diverse set of labels, and each sample is accompanied by a textual description of its contents. The goal of the dataset is to boost research on the exploitation of interferometric data, enabling the application of state-of-the-art computer vision and NLP methods. Furthermore, the annotated dataset is bundled with a large archive of unlabeled frames to enable large-scale self-supervised learning. The final size of the dataset amounts to 110,573 interferograms.
2 PAPERS • NO BENCHMARKS YET
Cross-Reference Omnidirectional Stitching IQA is a novel omnidirectional image dataset containing stitched images as well as dual-fisheye images captured at the standard quarters of 0°, 90°, 180°, and 270°. In this manner, when evaluating the quality of an image stitched from a pair of fisheye images (e.g., 0° and 180°), the other pair of fisheye images (e.g., 90° and 270°) can be used as the cross-reference to provide ground-truth observations of the stitching regions; a minimal sketch of this pairing follows the entry.
1 PAPER • NO BENCHMARKS YET
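The cross-reference relationship described above pairs each stitching pair with the remaining fisheye pair, offset by 90°. The helper below is a minimal sketch of that pairing logic; the function and variable names are illustrative and not part of the dataset's tooling.

```python
# Capture positions used in the dataset: the standard quarters of the circle.
QUARTERS = (0, 90, 180, 270)

def cross_reference_pair(stitch_pair):
    """Given the two fisheye capture angles used to produce a stitched image
    (e.g. (0, 180)), return the remaining pair (e.g. (90, 270)), which covers
    the stitching regions and serves as the cross-reference.
    Illustrative helper, not official dataset tooling."""
    remaining = tuple(a for a in QUARTERS if a not in stitch_pair)
    assert len(remaining) == 2, "a stitching pair must use two of the four quarters"
    return remaining

print(cross_reference_pair((0, 180)))   # (90, 270)
print(cross_reference_pair((90, 270)))  # (0, 180)
```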
The Fraunhofer Portugal AICOS EDoF Dataset was produced within the TAMI project and is composed of images of microscopic fields of view (FOV) of liquid-based cervical cytology (LBC) samples. A total of 15 LBC samples were supplied by the pathology services of Hospital Fernando Fonseca and the Portuguese Oncology Institute of Porto. For each LBC sample, a set of images was obtained using a version of the µSmartScope [1,2] prototype adapted to the cervical cytology use case [3,4].