Visual attention quality database for benchmarking performance evaluation metrics

Milind S. Gide, Samuel F. Dodge, Lina Karam

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

With the increased focus on visual attention (VA) over the last decade, a large number of computational visual saliency models have been developed. These models are evaluated using performance metrics that measure how well a predicted saliency map matches eye-tracking data obtained from human observers. Although a number of performance evaluation metrics exist, there is no clear consensus on which metric is best. This work presents a subjective study in which human observers rate saliency maps computed by existing VA models by visually comparing them with ground-truth maps obtained from eye-tracking data. The subjective ratings are then correlated, using several correlation measures, with the scores produced by existing metrics as well as by a proposed objective VA performance evaluation metric. The correlation results show that the proposed objective VA metric outperforms the existing metrics.
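The abstract refers to objective saliency metrics and to correlation measures without naming them. One widely used existing metric in this literature is the linear correlation coefficient (CC) between the predicted map and the ground-truth fixation density map, and subjective-versus-objective agreement is commonly quantified with Pearson and Spearman correlations. The Python sketch below illustrates that general pipeline; the array names, dummy data, and the choice of CC as the objective metric are illustrative assumptions and do not reproduce the paper's proposed metric.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def cc_metric(saliency_map: np.ndarray, gt_map: np.ndarray) -> float:
    """Linear correlation coefficient (CC): a standard objective metric
    for comparing a predicted saliency map with a ground-truth fixation
    density map derived from eye-tracking data."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    g = (gt_map - gt_map.mean()) / (gt_map.std() + 1e-12)
    return float(np.mean(s * g))

# Dummy data standing in for one VA model's maps over a small image set.
rng = np.random.default_rng(0)
n_images = 20
objective_scores = []
for _ in range(n_images):
    gt = rng.random((48, 64))                      # ground-truth fixation map
    pred = 0.7 * gt + 0.3 * rng.random(gt.shape)   # imperfect predicted map
    objective_scores.append(cc_metric(pred, gt))

# Hypothetical mean human ratings of the same predicted maps (1-5 scale),
# standing in for the subjective-study data described in the abstract.
subjective_scores = rng.uniform(1, 5, n_images)

# Agreement between the objective metric scores and the subjective ratings,
# measured with the usual linear and rank correlation coefficients.
plcc, _ = pearsonr(objective_scores, subjective_scores)
srocc, _ = spearmanr(objective_scores, subjective_scores)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```

In a benchmarking study of this kind, a metric that correlates more strongly with the subjective ratings across models and images is judged the better evaluation metric; with the random ratings above, the correlations are of course near zero.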

Original language: English (US)
Title of host publication: 2016 IEEE International Conference on Image Processing, ICIP 2016 - Proceedings
Publisher: IEEE Computer Society
Pages: 2792-2796
Number of pages: 5
Volume: 2016-August
ISBN (Electronic): 9781467399616
DOIs
State: Published - Aug 3 2016
Event: 23rd IEEE International Conference on Image Processing, ICIP 2016 - Phoenix, United States
Duration: Sep 25 2016 - Sep 28 2016

Other

Other: 23rd IEEE International Conference on Image Processing, ICIP 2016
Country/Territory: United States
City: Phoenix
Period: 9/25/16 - 9/28/16

Keywords

  • Subjective Study
  • VA Models
  • VA Performance Metrics
  • Visual Attention

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing

