A study and comparison of human and deep learning recognition performance under visual distortions

Samuel Dodge, Lina Karam

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

21 Citations (Scopus)

Abstract

Deep neural networks (DNNs) achieve excellent performance on standard classification tasks. However, under image quality distortions such as blur and noise, classification accuracy becomes poor. In this work, we compare the performance of DNNs with human subjects on distorted images. We show that, although DNNs perform better than or on par with humans on good quality images, DNN performance is still much lower than human performance on distorted images. We additionally find that there is little correlation in errors between DNNs and human subjects. This could be an indication that the internal representations of images are different between DNNs and the human visual system. These comparisons with human performance could be used to guide future development of more robust DNNs.
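To make the kind of evaluation described in the abstract concrete, the sketch below shows one way a distortion-versus-accuracy curve could be measured: apply a distortion (Gaussian blur here) at increasing severity to test images and record the top-1 accuracy of a pretrained ImageNet classifier. This is only an illustrative sketch, not the paper's protocol; the model choice (ResNet-50), the severity levels, and the "data/" folder layout are assumptions.

    # Illustrative sketch (not the paper's exact setup): top-1 accuracy of a
    # pretrained ImageNet classifier as Gaussian blur severity increases.
    # Assumes PyTorch + torchvision, and that "data/" holds the 1000 ImageNet
    # synset folders in sorted order so ImageFolder's class indices line up
    # with the model's output indices.
    import torch
    from torchvision import datasets, models, transforms

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = models.resnet50(pretrained=True).eval().to(device)

    def make_loader(distort):
        # Standard ImageNet preprocessing with the distortion applied to the
        # PIL image before conversion to a tensor.
        tf = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            distort,
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])
        ds = datasets.ImageFolder("data/", transform=tf)
        return torch.utils.data.DataLoader(ds, batch_size=32)

    def accuracy(loader):
        correct = total = 0
        with torch.no_grad():
            for x, y in loader:
                pred = model(x.to(device)).argmax(dim=1).cpu()
                correct += (pred == y).sum().item()
                total += y.numel()
        return correct / total

    # Sweep blur severity; sigma = 0 uses an identity transform as the clean baseline.
    for sigma in [0.0, 1.0, 2.0, 4.0]:
        distort = (transforms.Lambda(lambda img: img) if sigma == 0.0
                   else transforms.GaussianBlur(kernel_size=9, sigma=sigma))
        print(f"blur sigma={sigma}: top-1 accuracy = {accuracy(make_loader(distort)):.3f}")

Additive noise can be swept the same way by inserting a noise transform after ToTensor; the human-subject side of the comparison reported in the paper has no code analogue here.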

Original language: English (US)
Title of host publication: 2017 26th International Conference on Computer Communications and Networks, ICCCN 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781509029914
DOI: 10.1109/ICCCN.2017.8038465
State: Published - Sep 14, 2017
Event: 26th International Conference on Computer Communications and Networks, ICCCN 2017 - Vancouver, Canada
Duration: Jul 31, 2017 - Aug 3, 2017

Fingerprint

Deep neural networks
Deep learning
Neural networks
Human performance
Human visual system
Image quality
Network performance
Vision
Learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Software
  • Management of Technology and Innovation
  • Information Systems and Management
  • Safety, Risk, Reliability and Quality
  • Media Technology
  • Control and Optimization

Cite this

Dodge, S., & Karam, L. (2017). A study and comparison of human and deep learning recognition performance under visual distortions. In 2017 26th International Conference on Computer Communications and Networks, ICCCN 2017 [8038465] Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/ICCCN.2017.8038465
