Indian Pines is a hyperspectral image segmentation dataset. The input data consist of hyperspectral bands over a single landscape in Indiana, US, with 145×145 pixels. For each pixel, the data set contains 220 spectral reflectance bands covering different portions of the electromagnetic spectrum in the wavelength range 0.4–2.5 µm.
78 PAPERS • 2 BENCHMARKS
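A minimal sketch of how the 145×145×220 cube can be flattened into a per-pixel design matrix, assuming the commonly distributed MATLAB files and variable keys (both assumptions, not part of the official release):

```python
# Sketch only: file names and .mat variable keys are assumptions.
import numpy as np
from scipy.io import loadmat

cube = loadmat("Indian_pines.mat")["indian_pines"]          # assumed key; shape (145, 145, 220)
labels = loadmat("Indian_pines_gt.mat")["indian_pines_gt"]  # assumed key; shape (145, 145)

X = cube.reshape(-1, cube.shape[-1]).astype(np.float32)     # (145*145, 220) per-pixel spectra
y = labels.ravel()                                          # 0 marks unlabelled background pixels
X_labelled, y_labelled = X[y > 0], y[y > 0]
print(X_labelled.shape, np.unique(y_labelled).size)
```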
BigEarthNet consists of 590,326 Sentinel-2 image patches, each of which is a section of i) 120x120 pixels for 10m bands; ii) 60x60 pixels for 20m bands; and iii) 20x20 pixels for 60m bands.
64 PAPERS • 3 BENCHMARKS
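A sketch of reading the four 10 m bands of one patch into a (4, 120, 120) stack with rasterio; the per-band GeoTIFF naming used here is an assumption rather than the official layout:

```python
# Sketch: folder layout and file naming are assumptions.
import numpy as np
import rasterio

PATCH_SIZES = {10: 120, 20: 60, 60: 20}    # metres per pixel -> patch side length in pixels
BANDS_10M = ["B02", "B03", "B04", "B08"]   # the four 10 m Sentinel-2 bands

def read_10m_stack(patch_dir: str, patch_id: str) -> np.ndarray:
    layers = []
    for band in BANDS_10M:
        with rasterio.open(f"{patch_dir}/{patch_id}_{band}.tif") as src:  # assumed naming
            layers.append(src.read(1))
    stack = np.stack(layers)
    assert stack.shape == (len(BANDS_10M), PATCH_SIZES[10], PATCH_SIZES[10])
    return stack
```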
The Pavia University dataset is a hyperspectral image dataset gathered by the Reflective Optics System Imaging Spectrometer (ROSIS-3) sensor over the city of Pavia, Italy. The image consists of 610×340 pixels with 115 spectral bands. The image is divided into 9 classes with a total of 42,776 labelled samples, covering asphalt, meadows, gravel, trees, metal sheets, bare soil, bitumen, bricks, and shadows.
39 PAPERS • 1 BENCHMARK
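A small sketch of counting the 42,776 labelled samples per class from the ground-truth map, assuming the commonly distributed MATLAB file and variable key (both assumptions):

```python
# Sketch only: file name and .mat variable key are assumptions.
import numpy as np
from scipy.io import loadmat

gt = loadmat("PaviaU_gt.mat")["paviaU_gt"]           # assumed key; shape (610, 340), 0 = unlabelled
classes, counts = np.unique(gt[gt > 0], return_counts=True)
print(dict(zip(classes.tolist(), counts.tolist())))  # 9 classes
print(counts.sum())                                  # expected to total 42,776
```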
Kennedy Space Center is a dataset for the classification of wetland vegetation at the Kennedy Space Center, Florida using hyperspectral imagery. Hyperspectral data were acquired over KSC on March 23, 1996 using JPL's Airborne Visible/Infrared Imaging Spectrometer.
16 PAPERS • 1 BENCHMARK
Salinas Scene is a hyperspectral dataset collected by the 224-band AVIRIS sensor over Salinas Valley, California, and is characterized by high spatial resolution (3.7-meter pixels). The area covered comprises 512 lines by 217 samples. 20 water absorption bands were discarded: [108-112], [154-167], 224. The image was available only as at-sensor radiance data. It includes vegetables, bare soils, and vineyard fields. The Salinas ground truth contains 16 classes.
12 PAPERS • 3 BENCHMARKS
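A sketch of dropping the 20 discarded water-absorption bands listed above (quoted as 1-based band numbers) from the 224-band cube:

```python
# Sketch: the cube below is a placeholder; only the band bookkeeping is illustrated.
import numpy as np

discarded_1based = list(range(108, 113)) + list(range(154, 168)) + [224]
assert len(discarded_1based) == 20
keep = np.setdiff1d(np.arange(224), np.array(discarded_1based) - 1)  # 0-based indices to keep

cube = np.zeros((512, 217, 224), dtype=np.float32)  # placeholder for the at-sensor radiance cube
print(cube[:, :, keep].shape)                        # (512, 217, 204)
```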
SEN12MS-CR is a multi-modal and mono-temporal data set for cloud removal. It contains observations covering 175 globally distributed Regions of Interest, recorded in one of four seasons throughout 2018. For each region, paired and co-registered synthetic aperture radar (SAR) Sentinel-1 measurements as well as cloudy and cloud-free optical multi-spectral Sentinel-2 observations from the European Space Agency's Copernicus mission are provided. The Sentinel satellites provide public-access data and are among the most prominent satellites in Earth observation.
11 PAPERS • 1 BENCHMARK
The Multimodal Material Segmentation (MCubeS) dataset contains 500 sets of images from 42 street scenes. Each scene has images for four modalities: RGB, angle of linear polarization (AoLP), degree of linear polarization (DoLP), and near-infrared (NIR). The dataset provides annotated ground-truth labels for both material and semantic segmentation for every pixel. The dataset is divided into a training set with 302 image sets, a validation set with 96 image sets, and a test set with 102 image sets. Each image has 1224×1024 pixels, and each pixel is annotated with one of 20 class labels.
10 PAPERS • 1 BENCHMARK
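A sketch of assembling one MCubeS sample into a single (H, W, 6) array, assuming single-channel PNGs for AoLP, DoLP, and NIR and a hypothetical directory layout (this is not the official loader):

```python
# Sketch: directory layout, file naming and channel counts are assumptions.
import numpy as np
from PIL import Image

def load_sample(root: str, scene: str, frame: str) -> np.ndarray:
    rgb = np.asarray(Image.open(f"{root}/rgb/{scene}_{frame}.png"))           # (1024, 1224, 3)
    extras = [
        np.asarray(Image.open(f"{root}/{m}/{scene}_{frame}.png"))[..., None]  # (1024, 1224, 1) each
        for m in ("aolp", "dolp", "nir")
    ]
    return np.concatenate([rgb] + extras, axis=-1)                            # (1024, 1224, 6)
```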
The High-Quality Wide Multi-Channel Attack (HQ-WMCA) database consists of 2904 short multi-modal video recordings of both bona fide and presentation attacks. There are 555 bona fide presentations from 51 participants, and the remaining 2349 are presentation attacks. The data is recorded from several channels including color, depth, thermal, infrared (spectra), and short-wave infrared (spectra).
8 PAPERS • NO BENCHMARKS YET
The RIT-18 dataset was built for the semantic segmentation of remote sensing imagery. It was collected with the Tetracam Micro-MCA6 multispectral imaging sensor flown on-board a DJI-1000 octocopter.
SEN12MS-CR-TS is a multi-modal and multi-temporal data set for cloud removal. It contains time series of paired and co-registered Sentinel-1 and cloudy as well as cloud-free Sentinel-2 data from the European Space Agency's Copernicus mission. Each time series contains 30 cloudy and clear observations regularly sampled throughout the year 2018. The multi-temporal data set is readily pre-processed and backward-compatible with SEN12MS-CR.
7 PAPERS • 1 BENCHMARK
HS-SOD is a hyperspectral salient object detection dataset with a collection of 60 hyperspectral images with their respective ground-truth binary images and representative rendered colour images (sRGB).
5 PAPERS • NO BENCHMARKS YET
Houston is a hyperspectral image classification dataset. The hyperspectral imagery consists of 144 spectral bands in the 380 nm to 1050 nm region and has been calibrated to at-sensor spectral radiance units, SRU = $\mu\text{W}/(\text{cm}^2 \cdot \text{sr} \cdot \text{nm})$. The corresponding co-registered DSM consists of elevation in meters above sea level (per the Geoid 2012A model).
5 PAPERS • 1 BENCHMARK
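A sketch of approximating band-centre wavelengths for the 144 bands, assuming evenly spaced centres across the stated 380-1050 nm range (the official band-centre table should be preferred where available):

```python
# Sketch: even spacing is an assumption.
import numpy as np

n_bands = 144
centres_nm = np.linspace(380.0, 1050.0, n_bands)  # approximate band centres
spacing_nm = centres_nm[1] - centres_nm[0]        # ~4.7 nm under this assumption
print(centres_nm[:3], spacing_nm)
```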
Botswana is a hyperspectral image classification dataset. The NASA EO-1 satellite acquired a sequence of data over the Okavango Delta, Botswana in 2001-2004. The Hyperion sensor on EO-1 acquires data at 30 m pixel resolution over a 7.7 km strip in 242 bands covering the 400-2500 nm portion of the spectrum in 10 nm windows. Preprocessing of the data was performed by the UT Center for Space Research to mitigate the effects of bad detectors, inter-detector miscalibration, and intermittent anomalies. Uncalibrated and noisy bands that cover water absorption features were removed, and the remaining 145 bands were included as candidate features: [10-55, 82-97, 102-119, 134-164, 187-220]. The data analyzed in this study, acquired May 31, 2001, consist of observations from 14 identified classes representing the land cover types in seasonal swamps, occasional swamps, and drier woodlands located in the distal portion of the Delta.
4 PAPERS • 1 BENCHMARK
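A sketch that rebuilds the retained 1-based band list quoted above and checks that it indeed yields 145 candidate features out of the 242 Hyperion bands:

```python
import numpy as np

retained_1based = (list(range(10, 56)) + list(range(82, 98)) + list(range(102, 120))
                   + list(range(134, 165)) + list(range(187, 221)))
assert len(retained_1based) == 145
keep = np.array(retained_1based) - 1   # 0-based indices into the 242-band cube
```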
PIDray is a large-scale dataset which covers various cases in real-world scenarios for prohibited item detection, especially for deliberately hidden items. The dataset contains 12 categories of prohibited items in 47,677 X-ray images with high-quality annotated segmentation masks and bounding boxes.
4 PAPERS • NO BENCHMARKS YET
EuroCrops is a dataset for automatic vegetation classification from multi-spectral and multi-temporal satellite data, annotated with official LPIS reporting data from countries of the European Union and curated by the Technical University of Munich and GAF AG. The project is managed by the DLR Space Administration and funded by BMWi (Federal Ministry for Economic Affairs and Energy). The dataset is publicly available for research purposes, with the aim of assisting in the subsidy control of agricultural self-declarations.
3 PAPERS • NO BENCHMARKS YET
HiXray is a high-quality X-ray security inspection image dataset, which contains 102,928 common prohibited items of 8 categories. It was gathered from real-world airport security inspections and annotated by professional security inspectors.
SSL4EO-S12 is a large-scale, global, multimodal, and multi-seasonal corpus of satellite imagery from the ESA Sentinel-1 & -2 satellite missions.
The data set covers recordings of ripening fruit with labels from destructive measurements (fruit flesh firmness, sugar content, and overall ripeness). The labels are provided in three categories (firmness, sweetness, and overall ripeness). Four measurement series were performed. Besides 1018 labeled recordings, the data set contains 4671 recordings without ripeness labels.
2 PAPERS • NO BENCHMARKS YET
This dataset contains the data for the paper 'Using Multiple Instance Learning for Explainable Solar Flare Prediction'.
Pavia Centre is a hyperspectral dataset acquired by the ROSIS sensor during a flight campaign over Pavia, northern Italy. The number of spectral bands is 102 for Pavia Centre. Pavia Centre is a 1096×1096-pixel image with a geometric resolution of 1.3 meters. The ground truth differentiates 9 classes. The Pavia scenes were provided by Prof. Paolo Gamba from the Telecommunications and Remote Sensing Laboratory, Pavia University (Italy).
The dataset of Thermal Bridges on Building Rooftops (TBBR dataset) consists of annotated combined RGB and thermal drone images with a height map. All images were converted to a uniform format of 3000$\times$4000 pixels, aligned, and cropped to 2400$\times$3400 to remove empty borders.
2 PAPERS • 2 BENCHMARKS
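A sketch of the centre crop from 3000×4000 to 2400×3400 described above, assuming a symmetric border (the exact offsets used for the released images may differ):

```python
# Sketch: symmetric cropping is an assumption.
import numpy as np

def centre_crop(img: np.ndarray, out_h: int = 2400, out_w: int = 3400) -> np.ndarray:
    h, w = img.shape[:2]
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return img[top:top + out_h, left:left + out_w]

frame = np.zeros((3000, 4000, 3), dtype=np.uint8)  # placeholder for an aligned RGB/thermal frame
print(centre_crop(frame).shape)                    # (2400, 3400, 3)
```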
Urban is one of the most widely used hyperspectral datasets in hyperspectral unmixing studies. There are 307×307 pixels, each of which corresponds to a 2×2 m² area. The image has 210 wavelengths ranging from 400 nm to 2500 nm, giving a spectral resolution of 10 nm. After channels 1-4, 76, 87, 101-111, 136-153, and 198-210 are removed (due to dense water vapor and atmospheric effects), 162 channels remain (this is a common preprocessing step for hyperspectral unmixing analyses). There are three versions of the ground truth, containing 4, 5, and 6 endmembers, respectively.
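A sketch of the corresponding wavelength axis and channel mask, assuming 10 nm channel spacing starting at 400 nm and treating the channel lists above as 1-based:

```python
import numpy as np

wavelengths_nm = np.arange(400, 2500, 10)          # 210 channels under the 10 nm assumption
removed_1based = (list(range(1, 5)) + [76, 87] + list(range(101, 112))
                  + list(range(136, 154)) + list(range(198, 211)))
keep = np.setdiff1d(np.arange(210), np.array(removed_1based) - 1)
assert keep.size == 162                            # matches the 162 retained channels
```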
WHU-Hi dataset (Wuhan UAV-borne hyperspectral image) is collected and shared by the RSIDEA research group of Wuhan University, and it could serve as a benchmark dataset for precise crop classification and hyperspectral image classification studies. The WHU-Hi dataset contains three individual UAV-borne hyperspectral datasets: WHU-Hi-LongKou, WHU-Hi-HanChuan, and WHU-Hi-HongHu. All the datasets were acquired in farming areas with various crop types in Hubei province, China, via a Headwall Nano-Hyperspec sensor mounted on a UAV platform. Compared with spaceborne and airborne hyperspectral platforms, unmanned aerial vehicle (UAV)-borne hyperspectral systems can acquire hyperspectral imagery with a high spatial resolution (which we refer to here as H2 imagery). The research was published in Remote Sensing of Environment.
We captured several hyperspectral images in our lab using the multishot DD-CASSI architecture. The algorithm can be found on GitHub.
1 PAPER • NO BENCHMARKS YET
This dataset includes multi-spectral acquisitions of vegetation for the design of new DeepIndices. The images were acquired with the Airphen (Hyphen, Avignon, France) six-band multi-spectral camera configured with the 450/570/675/710/730/850 nm bands at a 10 nm FWHM. The data were acquired at the INRAe site in Montoldre (Allier, France, at 46°20'30.3"N 3°26'03.6"E) within the framework of the “RoSE challenge” funded by the French National Research Agency (ANR), and in Dijon (Burgundy, France, at 47°18'32.5"N 5°04'01.8"E) at the AgroSup Dijon site. Images of bean and corn, containing various natural weeds (yarrow, amaranth, geranium, plantago, etc.) and sown ones (mustard, goosefoot, mayweed, and ryegrass), with very distinct illumination conditions (shadow, morning, evening, full sun, cloudy, rain, ...), were acquired in a top-down view at 1.8 meters above the ground. (2020-05-01)
1 PAPER • 1 BENCHMARK
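As a baseline illustration of a hand-crafted index on these six bands, here is a sketch of an NDVI-style ratio using the 675 nm and 850 nm channels; the channel ordering in the released files is an assumption, and NDVI is only a classical reference point rather than one of the learned DeepIndices:

```python
# Sketch: channel ordering is an assumption.
import numpy as np

BAND_ORDER_NM = [450, 570, 675, 710, 730, 850]   # assumed order of the Airphen channels

def ndvi(cube: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """cube: (H, W, 6) reflectance stack ordered as BAND_ORDER_NM."""
    red = cube[..., BAND_ORDER_NM.index(675)].astype(np.float32)
    nir = cube[..., BAND_ORDER_NM.index(850)].astype(np.float32)
    return (nir - red) / (nir + red + eps)
```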
The dataset contains full-spectral autofluorescence lifetime microscopic images (FS-FLIM) acquired on unstained ex-vivo human lung tissue, comprising 100 4D hypercubes of 256×256 (spatial resolution) × 32 (time bins) × 512 (spectral channels from 500 nm to 780 nm). This dataset is associated with our papers "Deep Learning-Assisted Co-registration of Full-Spectral Autofluorescence Lifetime Microscopic Images with H&E-Stained Histology Images" (https://arxiv.org/abs/2202.07755) and "Full spectrum fluorescence lifetime imaging with 0.5 nm spectral and 50 ps temporal resolution" (https://doi.org/10.1038/s41467-021-26837-0). The FS-FLIM images provide transformative insights into human lung cancer with extra-dimensional information, enabling visual and precise detection of early lung cancer. With the methodology in our co-registration paper, FS-FLIM images can be registered with H&E-stained histology images, allowing characterisation of tumour and surrounding cells at a cellular level.
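A small sketch of collapsing one such hypercube into a spectral-intensity image by summing over the time-bin axis; the in-memory array layout is an assumption:

```python
# Sketch: axis ordering is an assumption.
import numpy as np

hypercube = np.zeros((256, 256, 32, 512), dtype=np.float32)  # placeholder for a real FS-FLIM cube
spectral_intensity = hypercube.sum(axis=2)                   # (256, 256, 512) intensity per channel
wavelengths_nm = np.linspace(500, 780, 512)                  # stated spectral range
```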
The LIB-HSI dataset contains hyperspectral reflectance images and their corresponding RGB images of building façades in a light industrial environment. The dataset also contains pixel-level annotated images for each hyperspectral/RGB image. The LIB-HSI dataset was created to develop deep learning methods for segmenting building facade materials.
This data set contains weekly scans of cauliflower and broccoli covering a ten-week growth cycle from transplant to harvest. The data set includes ground-truth physical characteristics of the crop; environmental data collected by a weather station and a soil-sensor network; and scans of the crop performed by an autonomous agricultural robot, which include stereo colour, thermal, and hyperspectral imagery. The crops were planted at Lansdowne Farm, a University of Sydney agricultural research and teaching facility. Lansdowne Farm is located in Cobbitty, a suburb 70 km south-west of Sydney in New South Wales (NSW), Australia. Four 80-metre raised crop beds were prepared with a north-south orientation. Approximately 144 Brassica were planted in each bed. Cauliflower was planted in the first and third beds (from west to east), and broccoli was planted in the second and fourth beds.
The Multi-Spectral Imaging via Computed Tomography (MUSIC) dataset is a two-part (2D and 3D spectral) open-access dataset for advanced image analysis of spectral radiographic (X-ray) scans, their tomographic reconstruction, and the detection of specific materials within such scans. The scans cover a photon energy range of roughly 20 keV to 160 keV.
A high-resolution multi-sensor remote sensing scene classification dataset, appropriate for training and evaluating image classification models in the remote sensing domain.
The ROAD dataset is made up of observations from the Low Frequency Array (LOFAR) telescope. LOFAR comprises 52 stations across Europe, where each station is an array of 96 dual-polarisation low-band antennas (LBA) in the 10–90 MHz range and 48 or 96 dual-polarisation high-band antennas (HBA) in the 110–250 MHz range. The data are four-dimensional, with the dimensions corresponding to time, frequency, polarisation, and station. The observing setup dictates the array configuration (i.e. the number of stations used), the number of frequency channels (Nf), the time sampling, and the overall integration time (Nt) of the observing session. Furthermore, the dual polarisation of the antennas results in a correlation product (Npol) of size 4. The ROAD dataset contains ten classes that describe various system-wide phenomena and anomalies from data obtained by the LOFAR telescope. These classes fall into four groups, including data processing system failures, electronic anomalies, and environmental effects.
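A sketch of the dimension ordering described above for a single observation, with placeholder sizes and an assumed [XX, XY, YX, YY] ordering of the four correlation products:

```python
# Sketch: array sizes and polarisation ordering are assumptions.
import numpy as np

n_time, n_freq, n_pol, n_station = 256, 512, 4, 52     # Npol = 4; up to 52 LOFAR stations
data = np.zeros((n_time, n_freq, n_pol, n_station), dtype=np.float32)
total_intensity = data[:, :, 0, :] + data[:, :, 3, :]  # XX + YY per station, under the assumed ordering
```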
This dataset contains the raw images for the Thermal Bridges on Building Rooftops (TBBR) dataset.
The ARPA-E funded TERRA-REF project is generating open-access reference datasets for the study of plant sensing, genomics, and phenomics. Sensor data were generated by a field scanner sensing platform that captures color, thermal, hyperspectral, and active fluorescence imagery as well as three-dimensional structure and associated environmental measurements. This dataset is provided alongside data collected using traditional field methods in order to support calibration and validation of algorithms used to extract plot-level phenotypes from these datasets.
Dataset of paired thermal and RGB images comprising ten diverse scenes—six indoor and four outdoor scenes— for 3D scene reconstruction and novel view synthesis (e.g. with NeRF).
The increasing use of deep learning techniques has reduced interpretation time and, ideally, reduced interpreter bias by automatically deriving geological maps from digital outcrop models. However, accurate validation of these automated mapping approaches is a significant challenge due to the subjective nature of geological mapping and the difficulty in collecting quantitative validation data. Additionally, many state-of-the-art deep learning methods are limited to 2D image data, which is insufficient for 3D digital outcrops, such as hyperclouds. To address these challenges, we present Tinto, a multi-sensor benchmark digital outcrop dataset designed to facilitate the development and validation of deep learning approaches for geological mapping, especially for non-structured 3D data like point clouds. Tinto comprises two complementary sets: 1) a real digital outcrop model from Corta Atalaya (Spain), with spectral attributes and ground-truth data, and 2) a synthetic twin that uses latent features in the original datasets to reconstruct realistic spectral data from the ground truth.
Reflectance measurements of Bidirectional Texture Functions (BTFs)
0 PAPERS • NO BENCHMARKS YET