Visual attention quality database for benchmarking performance evaluation metrics

Milind S. Gide, Samuel F. Dodge, Lina Karam

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

With the increased focus on visual attention (VA) in the last decade, a large number of computational visual saliency methods have been developed. These models are evaluated by using performance evaluation metrics that measure how well a predicted map matches eye-tracking data obtained from human observers. Though there are a number of existing performance evaluation metrics, there is no clear consensus on which evaluation metric is the best. This work proposes a subjective study that uses ratings from human observers to evaluate saliency maps computed by existing VA models based on comparing the maps visually with ground-truth maps obtained from eye-tracking data. The subjective ratings are correlated with the scores obtained from existing as well as a proposed objective VA performance evaluation metric using several correlation measures. The correlation results show that the proposed objective VA metric outperforms the existing metrics.
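The evaluation pipeline described in the abstract (score each model's saliency map against eye-tracking ground truth with an objective metric, then correlate those scores with subjective ratings) can be sketched in a few lines. The snippet below is a minimal illustration only, not the paper's proposed metric: it uses two standard existing metrics, the linear correlation coefficient (CC) and normalized scanpath saliency (NSS), together with random placeholder data, and the function names `cc` and `nss` are hypothetical.

```python
# Minimal sketch (not the paper's proposed metric): score saliency maps with two
# standard VA evaluation metrics, then correlate the per-image metric scores
# with subjective ratings. All inputs below are hypothetical placeholders.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def cc(pred_map, gt_density):
    """Linear correlation coefficient between a predicted saliency map and a
    ground-truth fixation density map (both 2-D arrays)."""
    p = (pred_map - pred_map.mean()) / (pred_map.std() + 1e-12)
    g = (gt_density - gt_density.mean()) / (gt_density.std() + 1e-12)
    return float((p * g).mean())

def nss(pred_map, fixation_mask):
    """Normalized scanpath saliency: mean of the z-scored predicted map at
    human fixation locations (fixation_mask is a binary 2-D array)."""
    z = (pred_map - pred_map.mean()) / (pred_map.std() + 1e-12)
    return float(z[fixation_mask.astype(bool)].mean())

# Hypothetical example: score one model on a set of images, then correlate the
# per-image metric scores with mean subjective ratings from a rating study.
rng = np.random.default_rng(0)
n_images, h, w = 10, 48, 64
pred_maps = rng.random((n_images, h, w))      # model saliency maps
gt_densities = rng.random((n_images, h, w))   # eye-tracking density maps
fix_masks = gt_densities > np.quantile(gt_densities, 0.95, axis=(1, 2), keepdims=True)
subjective = rng.random(n_images)             # mean observer ratings per image

cc_scores = np.array([cc(p, g) for p, g in zip(pred_maps, gt_densities)])
nss_scores = np.array([nss(p, m) for p, m in zip(pred_maps, fix_masks)])

# Correlation measures between metric scores and subjective ratings.
for name, scores in [("CC", cc_scores), ("NSS", nss_scores)]:
    r, _ = pearsonr(scores, subjective)
    rho, _ = spearmanr(scores, subjective)
    tau, _ = kendalltau(scores, subjective)
    print(f"{name}: Pearson={r:.3f}  Spearman={rho:.3f}  Kendall={tau:.3f}")
```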

Original language: English (US)
Title of host publication: 2016 IEEE International Conference on Image Processing, ICIP 2016 - Proceedings
Publisher: IEEE Computer Society
Pages: 2792-2796
Number of pages: 5
Volume: 2016-August
ISBN (Electronic): 9781467399616
DOI: https://doi.org/10.1109/ICIP.2016.7532868
State: Published - Aug 3 2016
Event: 23rd IEEE International Conference on Image Processing, ICIP 2016 - Phoenix, United States
Duration: Sep 25 2016 - Sep 28 2016

Other

23rd IEEE International Conference on Image Processing, ICIP 2016
Country: United States
City: Phoenix
Period: 9/25/16 - 9/28/16


Keywords

  • Subjective Study
  • VA Models
  • VA Performance Metrics
  • Visual Attention

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing

Cite this

Gide, M. S., Dodge, S. F., & Karam, L. (2016). Visual attention quality database for benchmarking performance evaluation metrics. In 2016 IEEE International Conference on Image Processing, ICIP 2016 - Proceedings (Vol. 2016-August, pp. 2792-2796). [7532868] IEEE Computer Society. https://doi.org/10.1109/ICIP.2016.7532868
