Abstract
Languages are best learned in immersive environments with rich feedback. This is especially true for signed languages due to their visual and poly-componential nature. Computer-Aided Language Learning (CALL) solutions successfully incorporate feedback for spoken languages, but no such solution exists for signed languages. Current Sign Language Recognition (SLR) systems are not interpretable and hence cannot provide feedback to learners. In this work, we propose a modular and explainable machine learning system that provides fine-grained feedback on location, movement, and hand-shape to learners of ASL. We also propose a waterfall architecture for combining the sub-modules, which prevents cognitive overload for learners and reduces computation time for feedback. The system has an overall test accuracy of 87.9% on real-world data consisting of 25 signs with 3 repetitions each from 100 learners.
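The waterfall idea described in the abstract can be illustrated with a minimal sketch: sub-module checks run in a fixed order (location, then movement, then hand-shape), and evaluation stops at the first failing component, so the learner receives one piece of feedback at a time and later modules never run. All function and field names below are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of a waterfall feedback pipeline for sign components.
# Each checker returns (passed, feedback); the pipeline short-circuits on the
# first failure, limiting both learner cognitive load and compute.

def check_location(sign):
    return sign.get("location_ok", False), "Adjust where the sign is produced."

def check_movement(sign):
    return sign.get("movement_ok", False), "Adjust the movement path."

def check_handshape(sign):
    return sign.get("handshape_ok", False), "Adjust the hand-shape."

def waterfall_feedback(sign):
    """Return feedback for the first failing component, or None if all pass."""
    for check in (check_location, check_movement, check_handshape):
        passed, feedback = check(sign)
        if not passed:
            return feedback  # remaining checks are skipped
    return None  # sign judged correct on all three components

# Example: movement fails, so the hand-shape module is never evaluated.
attempt = {"location_ok": True, "movement_ok": False, "handshape_ok": True}
print(waterfall_feedback(attempt))
```

The short-circuit return is the key design choice: a learner sees only the earliest error rather than a list of every deviation at once.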
| Original language | English (US) |
| --- | --- |
| Journal | CEUR Workshop Proceedings |
| Volume | 2327 |
| State | Published - Jan 1 2019 |
| Event | 2019 Joint ACM IUI Workshops, ACMIUI-WS 2019 - Los Angeles, United States; Duration: Mar 20 2019 → … |
Keywords
- Computer-aided learning
- Explainable AI
- Sign language learning
ASJC Scopus subject areas
- General Computer Science