
Abstract

Existing speech classification algorithms often perform well when evaluated on training and test data drawn from the same distribution. In practice, however, these distributions are not always the same, and in such cases the performance of trained models is likely to degrade. In this paper, we discuss an underutilized divergence measure and derive an estimable upper bound on the test error rate that depends on the error rate on the training data and on the distance between the training and test distributions. Using this bound as motivation, we develop a feature learning algorithm that aims to identify invariant speech features that generalize well to data similar to, but different from, the training set. Comparative results confirm the efficacy of the algorithm on a set of cross-domain speech classification tasks.
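
The paper's specific bound and divergence measure are not reproduced in this record; bounds of this general shape in the domain-adaptation literature typically take the form e_test(h) <= e_train(h) + d(D_train, D_test) + lambda, i.e. test error bounded by training error plus a divergence between the two distributions plus a residual term. As a loose, hypothetical illustration of the feature-selection idea only (the function names, the Gaussian assumption, and the symmetric KL score below are stand-ins and not the paper's divergence measure), one could rank features by how much their distribution shifts between domains and keep the most stable ones:

    # Hypothetical sketch (not the paper's algorithm): score each feature by a
    # symmetric KL divergence between Gaussian fits of the source (training)
    # and target (test) samples, then keep the least-divergent features.
    import numpy as np

    def symmetric_kl_gaussian(x_src, x_tgt, eps=1e-8):
        # Symmetric KL divergence between 1-D Gaussian fits of two samples.
        m1, v1 = x_src.mean(), x_src.var() + eps
        m2, v2 = x_tgt.mean(), x_tgt.var() + eps
        kl_12 = 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)
        kl_21 = 0.5 * (np.log(v1 / v2) + (v2 + (m2 - m1) ** 2) / v1 - 1.0)
        return kl_12 + kl_21

    def select_invariant_features(X_src, X_tgt, k):
        # Return indices of the k features whose distributions shift the least.
        scores = np.array([symmetric_kl_gaussian(X_src[:, j], X_tgt[:, j])
                           for j in range(X_src.shape[1])])
        return np.argsort(scores)[:k]

    # Example with random stand-in data (real use: acoustic feature matrices).
    rng = np.random.default_rng(0)
    X_src = rng.normal(size=(200, 40))
    X_tgt = rng.normal(loc=0.5, size=(200, 40))
    keep = select_invariant_features(X_src, X_tgt, k=10)

In such a sketch, the retained feature indices would then feed any standard classifier trained on source-domain labels, the assumption being that low-divergence features transfer better to the target domain.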

Original language: English (US)
Title of host publication: 2014 IEEE Workshop on Spoken Language Technology, SLT 2014 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 77-82
Number of pages: 6
ISBN (Electronic): 9781479971299
DOIs
State: Published - Apr 1 2014
Event: 2014 IEEE Workshop on Spoken Language Technology, SLT 2014 - South Lake Tahoe, United States
Duration: Dec 7 2014 – Dec 10 2014

Publication series

Name: 2014 IEEE Workshop on Spoken Language Technology, SLT 2014 - Proceedings

Other

Other: 2014 IEEE Workshop on Spoken Language Technology, SLT 2014
Country/Territory: United States
City: South Lake Tahoe
Period: 12/7/14 – 12/10/14

Keywords

  • Domain adaptation
  • Feature selection
  • Machine learning
  • Pathological speech analysis

ASJC Scopus subject areas

  • Computer Science Applications
  • Human-Computer Interaction
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
  • Language and Linguistics
