The GoPro dataset for deblurring consists of 3,214 pairs of blurred and sharp images at a resolution of 1,280×720, divided into 2,103 training pairs and 1,111 test pairs. Each pair comprises a realistic blurry image and the corresponding ground-truth sharp image, obtained with a high-speed camera.
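The description above notes that the pairs come from a high-speed camera; blurry images of this kind are commonly synthesized by averaging consecutive sharp frames to mimic a longer exposure. Below is a minimal sketch of that idea; the frame count, the use of the middle frame as ground truth, and the gamma value are illustrative assumptions, not values taken from the dataset description.

```python
import numpy as np

def synthesize_blur_pair(frames, gamma=2.2):
    """frames: list of HxWx3 uint8 sharp frames from a high-speed video.

    Returns a (blurry, sharp) pair: the blurry image is the average of
    the frames in (assumed) linear light; the sharp ground truth is the
    mid-exposure frame.
    """
    # Undo the (assumed) gamma so the average approximates sensor integration.
    linear = [(f.astype(np.float32) / 255.0) ** gamma for f in frames]
    blurry = np.mean(linear, axis=0) ** (1.0 / gamma)  # back to display gamma
    sharp = frames[len(frames) // 2]                   # mid-exposure frame as GT
    return (blurry * 255.0).astype(np.uint8), sharp
```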
305 PAPERS • 3 BENCHMARKS
The dataset consists of 4,738 pairs of images of 232 different scenes including reference pairs. All images were captured both in the camera raw and JPEG formats, hence generating two datasets: RealBlur-R from the raw images, and RealBlur-J from the JPEG images. Each training set consists of 3,758 image pairs, while each test set consists of 980 image pairs.
80 PAPERS • 4 BENCHMARKS
Consists of 8,422 blurry and sharp image pairs with 65,784 densely annotated foreground (FG) human bounding boxes.
68 PAPERS • 4 BENCHMARKS
The Dialog State Tracking Challenges 2 & 3 (DSTC2&3) were research challenges focused on improving the state of the art in tracking the state of spoken dialog systems. State tracking, sometimes called belief tracking, refers to accurately estimating the user's goal as a dialog progresses. Accurate state tracking is desirable because it provides robustness to errors in speech recognition and helps reduce the ambiguity inherent in language within a temporal process like dialog. In these challenges, participants were given labelled corpora of dialogs to develop state tracking algorithms. The trackers were then evaluated on a common set of held-out dialogs, which were released, unlabelled, during a one-week period.
29 PAPERS • 2 BENCHMARKS
This dataset focuses on two blur types: camera motion blur and defocus blur. For each type of blur, 5 scenes are synthesized using Blender, with multi-view cameras placed manually to mimic real data capture. To render images with camera motion blur, the camera pose is randomly perturbed, and poses are then linearly interpolated between the original and perturbed poses for each view; images rendered from the interpolated poses are blended in linear RGB space to produce the final blurry images. For defocus blur, Blender's built-in depth-of-field rendering is used, with a fixed aperture and a focus plane chosen randomly between the nearest and furthest depth.
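A minimal sketch of the motion-blur synthesis described above. Here `render_linear_rgb` is a hypothetical stand-in for the actual Blender render call, and the perturbation scale and sample count are illustrative assumptions.

```python
import numpy as np

def synthesize_motion_blur(pose, render_linear_rgb, n_samples=10, jitter=0.01):
    """pose: (translation[3], rotation_vector[3]) of the original camera.

    Perturbs the pose, linearly interpolates between the original and
    perturbed poses, renders each interpolated view, and blends the
    renders in linear RGB to form the blurry image.
    """
    t, r = pose
    t_pert = t + np.random.uniform(-jitter, jitter, size=3)
    r_pert = r + np.random.uniform(-jitter, jitter, size=3)
    renders = []
    for alpha in np.linspace(0.0, 1.0, n_samples):
        # Linear interpolation of the pose parameters.
        # (For large rotations, slerp on quaternions would be more accurate.)
        t_i = (1.0 - alpha) * t + alpha * t_pert
        r_i = (1.0 - alpha) * r + alpha * r_pert
        renders.append(render_linear_rgb(t_i, r_i))  # HxWx3 float, linear RGB
    return np.mean(renders, axis=0)  # blend in linear space -> blurry image
```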
22 PAPERS • NO BENCHMARKS YET
The RSBlur dataset provides pairs of real and synthetic blurred images with ground truth sharp images. The dataset enables the evaluation of deblurring methods and blur synthesis methods on real-world blurred images. Training, validation, and test sets consist of 8,878, 1,120, and 3,360 blurred images, respectively.
14 PAPERS • 2 BENCHMARKS
QMUL-SurvFace is a surveillance face recognition benchmark that contains 463,507 face images of 15,573 distinct identities captured in real-world uncooperative surveillance scenes over wide space and time.
10 PAPERS • 1 BENCHMARK
The realistic and dynamic scenes (REDS) dataset was proposed in the NTIRE 2019 challenge. It is composed of 300 video sequences at a resolution of 720×1,280, each with 100 frames; the training, validation, and test sets contain 240, 30, and 30 videos, respectively.
10 PAPERS • 2 BENCHMARKS
A qualitative dataset of real blurred videos, captured using a beam-splitter setup in a lab environment.
7 PAPERS • 1 BENCHMARK
A dataset of over 65,000 pairs of incorrectly white-balanced images and their corresponding correctly white-balanced images.
7 PAPERS • NO BENCHMARKS YET
A large-scale multi-scene dataset for stereo deblurring, containing 20,637 blurry-sharp stereo image pairs from 135 diverse sequences and their corresponding bidirectional disparities.
2 PAPERS • NO BENCHMARKS YET
This dataset consists of blurred, noisy and defocused images.
1 PAPER • NO BENCHMARKS YET
Contains three difficult real-world scenarios: uncontrolled videos taken by UAVs and manned gliders, as well as controlled videos taken on the ground. Over 160,000 annotated frames for hundreds of ImageNet classes are available and are used for baseline experiments that assess the impact of known and unknown image artifacts and other conditions on common deep-learning-based object classification approaches.
YorkTag provides pairs of sharp/blurred images containing fiducial markers, proposed for training and for the qualitative and quantitative evaluation of marker deblurring models.