Unconstrained ear recognition using deep neural networks

Samuel Dodge, Jinane Mounsef, Lina Karam

Research output: Contribution to journal › Article

9 Citations (Scopus)

Abstract

The authors perform unconstrained ear recognition using transfer learning with deep neural networks (DNNs). First, they show how existing DNNs can be used as feature extractors. The extracted features are fed to a shallow classifier to perform ear recognition. Performance can be improved by augmenting the training dataset with small image transformations. Next, they compare the performance of the feature-extraction models with fine-tuned networks. However, because the datasets are limited in size, a fine-tuned network tends to over-fit. They propose a deep learning-based averaging ensemble to reduce the effect of over-fitting. Performance results are provided on the unconstrained AWE and CVLE ear recognition datasets, as well as on a combined AWE + CVLE dataset. They show that their ensemble achieves the best recognition performance on these datasets compared with DNN feature-extraction-based models and single fine-tuned models.
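The averaging ensemble described in the abstract can be illustrated with a short sketch: each fine-tuned network outputs a softmax probability vector per ear image, the vectors are averaged across networks, and the ensemble prediction is the argmax of the mean probability. The function names and toy shapes below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over class scores
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(per_model_logits):
    """Average the softmax outputs of several networks, then take the argmax.

    per_model_logits: array of shape (n_models, n_images, n_classes)
    returns: predicted class index per image, shape (n_images,)
    """
    probs = softmax(np.asarray(per_model_logits), axis=-1)
    mean_probs = probs.mean(axis=0)  # average probabilities over models
    return mean_probs.argmax(axis=-1)

# toy example: 3 models, 2 images, 4 identity classes;
# every model weakly favours class 2 for image 0 and class 1 for image 1
logits = np.zeros((3, 2, 4))
logits[:, 0, 2] = 2.0
logits[:, 1, 1] = 2.0
print(ensemble_predict(logits))  # -> [2 1]
```

Averaging probabilities rather than hard votes lets confident models outweigh uncertain ones, which is what makes this a useful regularizer when each fine-tuned network over-fits in a different way.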

Original language: English (US)
Pages (from-to): 207-214
Number of pages: 8
Journal: IET Biometrics
Volume: 7
Issue number: 3
DOI: 10.1049/iet-bmt.2017.0208
State: Published - May 1 2018

Fingerprint

  • Feature extraction
  • Classifiers
  • Deep neural networks
  • Deep learning

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition

Cite this

Dodge, Samuel; Mounsef, Jinane; Karam, Lina. Unconstrained ear recognition using deep neural networks. In: IET Biometrics, Vol. 7, No. 3, 01.05.2018, p. 207-214.

@article{20faf7b49b05476b8b48ce945d53aab0,
title = "Unconstrained ear recognition using deep neural networks",
abstract = "The authors perform unconstrained ear recognition using transfer learning with deep neural networks (DNNs). First, they show how existing DNNs can be used as a feature extractor. The extracted features are used by a shallow classifier to perform ear recognition. Performance can be improved by augmenting the training dataset with small image transformations. Next, they compare the performance of the feature-extraction models with fine-tuned networks. However, because the datasets are limited in size, a fine-tuned network tends to over-fit. They propose a deep learning-based averaging ensemble to reduce the effect of over-fitting. Performance results are provided on unconstrained ear recognition datasets, the AWE and CVLE datasets as well as a combined AWE + CVLE dataset. They show that their ensemble results in the best recognition performance on these datasets as compared to DNN feature-extraction based models and single fine-tuned models.",
author = "Samuel Dodge and Jinane Mounsef and Lina Karam",
year = "2018",
month = "5",
day = "1",
doi = "10.1049/iet-bmt.2017.0208",
language = "English (US)",
volume = "7",
pages = "207--214",
journal = "IET Biometrics",
issn = "2047-4938",
publisher = "The Institution of Engineering and Technology",
number = "3",
}
