Multi-Illuminant Dataset

Evaluating the performance of color constancy algorithms is a challenging task. Traditional datasets have been created under the assumption of a single, globally uniform illuminant. This assumption can be enforced directly under laboratory conditions, and it also holds for natural images to some extent. However, a large number of real-world images contain at least two dominant illuminants. To address this issue, we recently proposed two approaches to recover the illuminant color locally (see also our color and reflectance page). Implicitly, a local estimation allows multiple illuminants to be compensated more accurately. However, the evaluation of such methods becomes much more difficult. Ground truth for multi-illuminant scenes is generally a tradeoff between scene realism and precision. For our work, we chose an approach that yields almost pixelwise ground truth on laboratory data.
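Given a per-pixel illuminant map of this kind, a multi-illuminant scene can be compensated with a spatially varying diagonal (von Kries) correction. The following is a minimal sketch under our own assumptions (linear RGB arrays, our own function name; it is not part of the dataset tools):

```python
import numpy as np

def apply_local_white_balance(image, illuminant_map):
    """Divide each pixel by its local illuminant color (diagonal / von Kries model).

    image:          H x W x 3 linear RGB image
    illuminant_map: H x W x 3 per-pixel illuminant colors (e.g. the ground truth)
    """
    # Normalize the illuminant map so the overall brightness is roughly preserved.
    norm = illuminant_map / illuminant_map.mean(axis=(0, 1), keepdims=True)
    corrected = image / np.clip(norm, 1e-6, None)
    return np.clip(corrected, 0.0, 1.0)
```

A gray surface lit by two different illuminants should come out uniform after this correction, which makes the routine easy to sanity-check.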
The gray paint reflected almost neutrally. In separate experiments, we evaluated the color impression of the paint. Note that the ground truth obtained this way discards all effects from interreflections. However, for the task at hand, we assumed that interreflections play a subordinate role.

Data Collection

The data was captured with a Canon EOS 550D and a Sigma 17-70 lens. The aperture and ISO settings are the same for all images. The scenes were lit using two Reuter's lamps and ambient light. The data is available as PNG images without gamma correction and without automatic white balance. Upon request, we are also happy to provide the RAW files (*.cr2), which are omitted from the download link to reduce the download size. Only a simple debayering was applied, averaging the two green channels.

Filters used:
Colors used:
Scenes:
"colorchecker" and "reference" have been captured for reference purposes. The benchmark dataset consists of the four remaining scenes "chalk", "fruits", "figures" and "rabbit". Filename of the original scene: DownloadThe dataset is available here (MD5: f5e32c5f3943e5ca26a0e6f5dd22d046). The project page with evaluation results and the code to obtain these results is here. The data was captured by Michael Bleier. If you use the dataset, please cite