Abstract

The filters learned by convolutional neural networks (CNNs) from different image datasets often appear similar. This similarity is frequently exploited for transfer learning, and as an initialization technique for different tasks on the same dataset or for the same task on similar datasets. Off-the-shelf CNN features have capitalized on this idea: such networks are promoted as highly transferable and general, and are used rather casually in day-to-day computer vision tasks. While the filters learned by these CNNs are related to the atomic structures of the images from which they are learned, all datasets yield similar-looking low-level filters. Building on the understanding that a dataset containing many such atomic structures learns general filters, and is therefore useful for initializing other networks, we propose a way to analyze and quantify generality. We apply this metric to several popular character recognition, natural image, and medical image datasets, and arrive at some interesting conclusions. Further experimentation also reveals that particular classes within a dataset are themselves more general than others.
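The paper's actual generality metric is not reproduced in this record. As a minimal sketch of the idea the abstract describes, one could compare two banks of first-layer filters with a naive cosine-similarity proxy; the function name and the proxy itself are illustrative assumptions, not the authors' method:

```python
import numpy as np

def filter_similarity(filters_a, filters_b):
    """Naive proxy for cross-dataset filter similarity: for each
    first-layer filter in bank A, find its best cosine match in
    bank B, then average those best-match scores.

    filters_a, filters_b: arrays of shape (n_filters, h, w).
    """
    a = filters_a.reshape(len(filters_a), -1)
    b = filters_b.reshape(len(filters_b), -1)
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    sims = a @ b.T                        # pairwise cosine similarities
    return float(np.abs(sims).max(axis=1).mean())

# Identical banks score ~1.0 (every filter matches itself);
# unrelated random banks score strictly lower.
rng = np.random.default_rng(0)
bank = rng.standard_normal((8, 5, 5))
print(filter_similarity(bank, bank))      # ~1.0
```

A high score between filters trained on two datasets would be consistent with the abstract's observation that low-level filters look similar across datasets; the paper goes further and quantifies which datasets (and classes) yield the most general filters.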

Original language: English (US)
Title of host publication: 2016 IEEE International Conference on Image Processing, ICIP 2016 - Proceedings
Publisher: IEEE Computer Society
Pages: 41-45
Number of pages: 5
Volume: 2016-August
ISBN (Electronic): 9781467399616
DOIs: 10.1109/ICIP.2016.7532315
State: Published - Aug 3 2016
Event: 23rd IEEE International Conference on Image Processing, ICIP 2016 - Phoenix, United States
Duration: Sep 25 2016 - Sep 28 2016


Fingerprint

Neural networks
Character recognition
Computer vision

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing

Cite this

Venkatesan, R., Gatupalli, V., & Li, B. (2016). On the generality of neural image features. In 2016 IEEE International Conference on Image Processing, ICIP 2016 - Proceedings (Vol. 2016-August, pp. 41-45). [7532315] IEEE Computer Society. https://doi.org/10.1109/ICIP.2016.7532315


Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

@inproceedings{83cd1e349a9443a5b80d801447b0ae43,
title = "On the generality of neural image features",
abstract = "The filters learned by convolutional neural networks (CNNs) from different image datasets often appear similar. This similarity is frequently exploited for transfer learning, and as an initialization technique for different tasks on the same dataset or for the same task on similar datasets. Off-the-shelf CNN features have capitalized on this idea: such networks are promoted as highly transferable and general, and are used rather casually in day-to-day computer vision tasks. While the filters learned by these CNNs are related to the atomic structures of the images from which they are learned, all datasets yield similar-looking low-level filters. Building on the understanding that a dataset containing many such atomic structures learns general filters, and is therefore useful for initializing other networks, we propose a way to analyze and quantify generality. We apply this metric to several popular character recognition, natural image, and medical image datasets, and arrive at some interesting conclusions. Further experimentation also reveals that particular classes within a dataset are themselves more general than others.",
author = "Ragav Venkatesan and Vijetha Gatupalli and Baoxin Li",
year = "2016",
month = "8",
day = "3",
doi = "10.1109/ICIP.2016.7532315",
language = "English (US)",
volume = "2016-August",
pages = "41--45",
booktitle = "2016 IEEE International Conference on Image Processing, ICIP 2016 - Proceedings",
publisher = "IEEE Computer Society",
address = "United States",

}

TY - GEN

T1 - On the generality of neural image features

AU - Venkatesan, Ragav

AU - Gatupalli, Vijetha

AU - Li, Baoxin

PY - 2016/8/3

Y1 - 2016/8/3

N2 - The filters learned by convolutional neural networks (CNNs) from different image datasets often appear similar. This similarity is frequently exploited for transfer learning, and as an initialization technique for different tasks on the same dataset or for the same task on similar datasets. Off-the-shelf CNN features have capitalized on this idea: such networks are promoted as highly transferable and general, and are used rather casually in day-to-day computer vision tasks. While the filters learned by these CNNs are related to the atomic structures of the images from which they are learned, all datasets yield similar-looking low-level filters. Building on the understanding that a dataset containing many such atomic structures learns general filters, and is therefore useful for initializing other networks, we propose a way to analyze and quantify generality. We apply this metric to several popular character recognition, natural image, and medical image datasets, and arrive at some interesting conclusions. Further experimentation also reveals that particular classes within a dataset are themselves more general than others.

AB - The filters learned by convolutional neural networks (CNNs) from different image datasets often appear similar. This similarity is frequently exploited for transfer learning, and as an initialization technique for different tasks on the same dataset or for the same task on similar datasets. Off-the-shelf CNN features have capitalized on this idea: such networks are promoted as highly transferable and general, and are used rather casually in day-to-day computer vision tasks. While the filters learned by these CNNs are related to the atomic structures of the images from which they are learned, all datasets yield similar-looking low-level filters. Building on the understanding that a dataset containing many such atomic structures learns general filters, and is therefore useful for initializing other networks, we propose a way to analyze and quantify generality. We apply this metric to several popular character recognition, natural image, and medical image datasets, and arrive at some interesting conclusions. Further experimentation also reveals that particular classes within a dataset are themselves more general than others.

UR - http://www.scopus.com/inward/record.url?scp=85006795601&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85006795601&partnerID=8YFLogxK

U2 - 10.1109/ICIP.2016.7532315

DO - 10.1109/ICIP.2016.7532315

M3 - Conference contribution

AN - SCOPUS:85006795601

VL - 2016-August

SP - 41

EP - 45

BT - 2016 IEEE International Conference on Image Processing, ICIP 2016 - Proceedings

PB - IEEE Computer Society

ER -