Abstract
Instrumental analysis of speech sometimes complements subjective evaluations in speech and language therapy; however, apart from elemental speech features such as pitch and formant statistics, higher-dimensional spectral features are rarely used in practice because they are clinically uninterpretable. Although these features likely relate to clinical intervention, the nature of that relationship has yet to be determined. This paper uses artificial recurrent neural networks to map high-dimensional spectral features into phonological features that are easily interpretable and provide fine-resolution information about articulation quality. Evaluation on a dysarthric speech data set shows a strong correlation between the phonological feature measures and perceptual ratings. To increase clinical utility, we also introduce a new way to visualize phonological disturbances that gives clinicians actionable information about intervention strategies.
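The abstract does not specify the network configuration, so the sketch below is only one plausible realization of the general idea: a recurrent network that maps a sequence of spectral frames to per-frame phonological feature posteriors. The use of PyTorch, the bidirectional GRU, the 80-dimensional filterbank input, the layer sizes, and the feature inventory are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a bidirectional GRU that
# maps frame-level spectral features (e.g., mel filterbank energies) to
# per-frame phonological feature posteriors. All sizes and the feature list
# below are assumptions for illustration.
import torch
import torch.nn as nn

PHONOLOGICAL_FEATURES = [  # hypothetical feature inventory
    "vocalic", "consonantal", "high", "back", "low",
    "anterior", "coronal", "round", "tense", "voice",
    "continuant", "nasal", "strident",
]

class SpectralToPhonological(nn.Module):
    def __init__(self, n_spectral=80, hidden=128,
                 n_features=len(PHONOLOGICAL_FEATURES)):
        super().__init__()
        self.rnn = nn.GRU(n_spectral, hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_features)

    def forward(self, x):
        # x: (batch, frames, n_spectral) spectral feature sequence
        h, _ = self.rnn(x)
        # Independent sigmoids: each phonological feature is treated as a
        # separate per-frame presence/absence decision.
        return torch.sigmoid(self.head(h))

if __name__ == "__main__":
    model = SpectralToPhonological()
    frames = torch.randn(1, 200, 80)   # ~2 s of 10 ms frames (dummy input)
    posteriors = model(frames)         # (1, 200, 13) per-frame posteriors
    print(posteriors.shape)
```

Under these assumptions, the per-frame posteriors could then be aggregated or tracked over time to yield the interpretable articulation measures and visualizations the abstract describes.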
Original language | English (US) |
---|---|
Title of host publication | 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 - Proceedings |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 5045-5049 |
Number of pages | 5 |
ISBN (Electronic) | 9781509041176 |
DOIs | |
State | Published - Jun 16 2017 |
Event | 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 - New Orleans, United States. Duration: Mar 5 2017 → Mar 9 2017 |
Other
Other | 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017 |
---|---|
Country | United States |
City | New Orleans |
Period | 3/5/17 → 3/9/17 |
Keywords
- clinical applications
- phonological features
- recurrent neural networks
ASJC Scopus subject areas
- Software
- Signal Processing
- Electrical and Electronic Engineering