Robust Vocal Quality Feature Embeddings for Dysphonic Voice Detection

Research output: Contribution to journal › Article › peer-review


Abstract

Approximately 1.2% of the world's population has impaired voice production. As a result, automatic dysphonic voice detection has attracted considerable academic and clinical interest. However, existing methods for automated voice assessment often fail to generalize outside the training conditions or to other related applications. In this paper, we propose a deep learning framework for generating acoustic feature embeddings sensitive to vocal quality and robust across different corpora. A contrastive loss is combined with a classification loss to train our deep learning model jointly. Data warping methods are used on input voice samples to improve the robustness of our method. Empirical results demonstrate that our method not only achieves high in-corpus and cross-corpus classification accuracy but also generates good embeddings sensitive to voice quality and robust across different corpora. We also compare our results against three baseline methods on clean and three variations of deteriorated in-corpus and cross-corpus datasets and demonstrate that the proposed model consistently outperforms the baseline methods.
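The abstract describes jointly training the model with a contrastive loss combined with a classification loss. A minimal NumPy sketch of such a combined objective is shown below; the margin-based contrastive form, the weighting factor `alpha`, and all function names are illustrative assumptions rather than the paper's actual formulation:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_label, margin=1.0):
    # Margin-based pairwise contrastive loss: pull embeddings of
    # same-quality pairs together, push different pairs apart by
    # at least `margin`. `same_label` is 1 for matching pairs, else 0.
    d = np.linalg.norm(emb_a - emb_b, axis=1)
    pos = same_label * d ** 2
    neg = (1.0 - same_label) * np.maximum(margin - d, 0.0) ** 2
    return 0.5 * np.mean(pos + neg)

def cross_entropy_loss(logits, labels):
    # Softmax cross-entropy over class logits
    # (e.g. healthy vs. dysphonic), computed in a numerically
    # stable way by subtracting the row-wise max.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])

def joint_loss(emb_a, emb_b, same_label, logits, labels, alpha=0.5):
    # Weighted sum of the two objectives: `alpha` trades off
    # embedding structure against classification accuracy.
    return (alpha * contrastive_loss(emb_a, emb_b, same_label)
            + (1.0 - alpha) * cross_entropy_loss(logits, labels))
```

In this sketch both terms are differentiable, so a deep network producing the embeddings and logits could be trained end-to-end on the summed loss, which is the general pattern the abstract refers to.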

Original language: English (US)
Pages (from-to): 1348-1359
Number of pages: 12
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 31
DOIs
State: Published - 2023

Keywords

  • Dysphonic voice
  • contrastive loss
  • embedding learning

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Computational Mathematics
  • Electrical and Electronic Engineering
  • Acoustics and Ultrasonics

