Abstract
In subjective evaluation of dysarthric speech, inter-rater agreement between clinicians can be low. Disagreement among clinicians stems from differences in their perceptual assessment abilities, familiarity with a client, clinical experience, etc. Recently, there has been interest in developing signal processing and machine learning models for objective evaluation of subjective speech quality. In this paper, we propose a new method that addresses this problem by collecting subjective ratings from multiple evaluators and modeling the reliability of each annotator within a machine learning framework. In contrast to previous work, our model explicitly captures the dependence of an evaluator's reliability on the speaker. We evaluate the model in a series of experiments on a dysarthric speech database and show that our method outperforms other similar approaches.
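The core idea of aggregating ratings from multiple evaluators while weighting each by an estimated reliability can be illustrated with a minimal sketch. The paper's actual model is not reproduced here; the function below is a hypothetical, simplified alternating scheme for continuous ratings, where each annotator's weight is the inverse variance of their residuals against the current consensus and the consensus is the precision-weighted mean:

```python
def estimate_reliability(ratings, n_iters=20, eps=1e-6):
    """Toy annotator-reliability estimator (not the paper's model).

    ratings: list of speakers, each a list of one score per annotator.
    Returns (true_scores, weights), where weights[j] is the inverse
    residual variance of annotator j (higher = more reliable).
    """
    n_speakers = len(ratings)
    n_annot = len(ratings[0])
    weights = [1.0] * n_annot
    # Initialize the consensus with the unweighted per-speaker mean.
    truth = [sum(r) / n_annot for r in ratings]
    for _ in range(n_iters):
        # Update each annotator's precision from their residual variance.
        for j in range(n_annot):
            var = sum((ratings[i][j] - truth[i]) ** 2
                      for i in range(n_speakers)) / n_speakers
            weights[j] = 1.0 / (var + eps)
        # Update the consensus as the precision-weighted mean of ratings.
        wsum = sum(weights)
        truth = [sum(weights[j] * ratings[i][j] for j in range(n_annot)) / wsum
                 for i in range(n_speakers)]
    return truth, weights
```

On synthetic data where two annotators agree closely and a third is erratic, the third annotator receives a much smaller weight and the consensus tracks the agreeing pair. Note this sketch assigns one global weight per annotator; the paper's contribution is precisely that reliability is additionally conditioned on the speaker.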
Original language | English (US)
---|---
Title of host publication | Conference Record of the 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016
Publisher | IEEE Computer Society
Pages | 827-830
Number of pages | 4
ISBN (Electronic) | 9781538639542
DOIs |
State | Published - Mar 1 2017
Event | 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016 - Pacific Grove, United States. Duration: Nov 6 2016 → Nov 9 2016
Other
Other | 50th Asilomar Conference on Signals, Systems and Computers, ACSSC 2016
---|---
Country/Territory | United States
City | Pacific Grove
Period | 11/6/16 → 11/9/16
ASJC Scopus subject areas
- Signal Processing
- Computer Networks and Communications