Simulating Dysarthric Speech for Training Data Augmentation in Clinical Speech Applications

Yishan Jiao, Ming Tu, Visar Berisha, Julie Liss

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Training machine learning algorithms for speech applications requires large, labeled training data sets. This is problematic for clinical applications where obtaining such data is prohibitively expensive because of privacy concerns or lack of access. As a result, clinical speech applications typically rely on small data sets with only tens of speakers. In this paper, we propose a method for simulating training data for clinical applications by transforming healthy speech to dysarthric speech using adversarial training. We evaluate the efficacy of our approach using both objective and subjective criteria. We present the transformed samples to five experienced speech-language pathologists (SLPs) and ask them to identify the samples as healthy or dysarthric. The results reveal that the SLPs identify the transformed speech as dysarthric 65% of the time. In a pilot classification experiment, we show that by using the simulated speech samples to balance an existing dataset, the classification accuracy improves by approximately 10% after data augmentation.
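The record gives no implementation details beyond the abstract, so the following is only a minimal sketch of the general adversarial-training idea it describes: a generator network learns to transform frame-level features of healthy speech toward dysarthric-sounding features, while a discriminator learns to distinguish the simulated features from real dysarthric ones. PyTorch, the feature dimension, the network sizes, and the train_step helper are all assumptions made for illustration, not the authors' actual model.

# Minimal GAN-style sketch of healthy-to-dysarthric feature conversion.
# All architectural choices below are illustrative assumptions, not the
# configuration used in the paper.
import torch
import torch.nn as nn

FEAT_DIM = 39  # assumed per-frame acoustic feature dimension (e.g., MFCCs + deltas)

# Generator: maps a healthy-speech feature frame to a simulated dysarthric frame.
generator = nn.Sequential(
    nn.Linear(FEAT_DIM, 128), nn.ReLU(),
    nn.Linear(128, FEAT_DIM),
)

# Discriminator: scores whether a frame looks like real dysarthric speech.
discriminator = nn.Sequential(
    nn.Linear(FEAT_DIM, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(healthy_feats, dysarthric_feats):
    """One adversarial update on batches of frame-level features (hypothetical helper)."""
    # Discriminator update: real dysarthric frames -> 1, simulated frames -> 0.
    fake = generator(healthy_feats).detach()
    d_loss = (bce(discriminator(dysarthric_feats), torch.ones(dysarthric_feats.size(0), 1))
              + bce(discriminator(fake), torch.zeros(fake.size(0), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: push simulated frames to be scored as dysarthric.
    simulated = generator(healthy_feats)
    g_loss = bce(discriminator(simulated), torch.ones(healthy_feats.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example usage with stand-in random tensors (real use would pass acoustic features):
# healthy = torch.randn(32, FEAT_DIM); dysarthric = torch.randn(32, FEAT_DIM)
# d_loss, g_loss = train_step(healthy, dysarthric)

Once trained, the generator's outputs could be added to the dysarthric class of an otherwise imbalanced training set, which is the augmentation use reported in the abstract's pilot classification experiment.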

Original language: English (US)
Title of host publication: 2018 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2018 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6009-6013
Number of pages: 5
Volume: 2018-April
ISBN (Print): 9781538646588
DOIs: 10.1109/ICASSP.2018.8462290
State: Published - Sep 10 2018
Event: 2018 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2018 - Calgary, Canada
Duration: Apr 15 2018 - Apr 20 2018

Other

Other: 2018 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2018
Country: Canada
City: Calgary
Period: 4/15/18 - 4/20/18

Keywords

  • Adversarial training
  • Data augmentation
  • Dysarthric speech
  • Voice conversion

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering

Cite this

Jiao, Y., Tu, M., Berisha, V., & Liss, J. (2018). Simulating Dysarthric Speech for Training Data Augmentation in Clinical Speech Applications. In 2018 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2018 - Proceedings (Vol. 2018-April, pp. 6009-6013). [8462290] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICASSP.2018.8462290

@inproceedings{ec9f0795967e4a9da0c10bb4d20c80ff,
title = "Simulating Dysarthric Speech for Training Data Augmentation in Clinical Speech Applications",
abstract = "Training machine learning algorithms for speech applications requires large, labeled training data sets. This is problematic for clinical applications where obtaining such data is prohibitively expensive because of privacy concerns or lack of access. As a result, clinical speech applications typically rely on small data sets with only tens of speakers. In this paper, we propose a method for simulating training data for clinical applications by transforming healthy speech to dysarthric speech using adversarial training. We evaluate the efficacy of our approach using both objective and subjective criteria. We present the transformed samples to five experienced speech-language pathologists (SLPs) and ask them to identify the samples as healthy or dysarthric. The results reveal that the SLPs identify the transformed speech as dysarthric 65{\%} of the time. In a pilot classification experiment, we show that by using the simulated speech samples to balance an existing dataset, the classification accuracy improves by approximately 10{\%} after data augmentation.",
keywords = "Adversarial training, Data augmentation, Dysarthric speech, Voice conversion",
author = "Yishan Jiao and Ming Tu and Visar Berisha and Julie Liss",
year = "2018",
month = "9",
day = "10",
doi = "10.1109/ICASSP.2018.8462290",
language = "English (US)",
isbn = "9781538646588",
volume = "2018-April",
pages = "6009--6013",
booktitle = "2018 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2018 - Proceedings",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY - GEN

T1 - Simulating Dysarthric Speech for Training Data Augmentation in Clinical Speech Applications

AU - Jiao, Yishan

AU - Tu, Ming

AU - Berisha, Visar

AU - Liss, Julie

PY - 2018/9/10

Y1 - 2018/9/10

N2 - Training machine learning algorithms for speech applications requires large, labeled training data sets. This is problematic for clinical applications where obtaining such data is prohibitively expensive because of privacy concerns or lack of access. As a result, clinical speech applications typically rely on small data sets with only tens of speakers. In this paper, we propose a method for simulating training data for clinical applications by transforming healthy speech to dysarthric speech using adversarial training. We evaluate the efficacy of our approach using both objective and subjective criteria. We present the transformed samples to five experienced speech-language pathologists (SLPs) and ask them to identify the samples as healthy or dysarthric. The results reveal that the SLPs identify the transformed speech as dysarthric 65% of the time. In a pilot classification experiment, we show that by using the simulated speech samples to balance an existing dataset, the classification accuracy improves by approximately 10% after data augmentation.

AB - Training machine learning algorithms for speech applications requires large, labeled training data sets. This is problematic for clinical applications where obtaining such data is prohibitively expensive because of privacy concerns or lack of access. As a result, clinical speech applications typically rely on small data sets with only tens of speakers. In this paper, we propose a method for simulating training data for clinical applications by transforming healthy speech to dysarthric speech using adversarial training. We evaluate the efficacy of our approach using both objective and subjective criteria. We present the transformed samples to five experienced speech-language pathologists (SLPs) and ask them to identify the samples as healthy or dysarthric. The results reveal that the SLPs identify the transformed speech as dysarthric 65% of the time. In a pilot classification experiment, we show that by using the simulated speech samples to balance an existing dataset, the classification accuracy improves by approximately 10% after data augmentation.

KW - Adversarial training

KW - Data augmentation

KW - Dysarthric speech

KW - Voice conversion

UR - http://www.scopus.com/inward/record.url?scp=85054290364&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85054290364&partnerID=8YFLogxK

U2 - 10.1109/ICASSP.2018.8462290

DO - 10.1109/ICASSP.2018.8462290

M3 - Conference contribution

AN - SCOPUS:85054290364

SN - 9781538646588

VL - 2018-April

SP - 6009

EP - 6013

BT - 2018 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2018 - Proceedings

PB - Institute of Electrical and Electronics Engineers Inc.

ER -