Learn2Sign: Explainable AI for sign language learning

Prajwal Paudyal, Junghyo Lee, Azamat Kamzin, Mohamad Soudki, Ayan Banerjee, Sandeep Gupta

Research output: Contribution to journal › Conference article

Abstract

Languages are best learned in immersive environments with rich feedback. This is especially true for signed languages due to their visual and poly-componential nature. Computer-Aided Language Learning (CALL) solutions successfully incorporate feedback for spoken languages, but no such solution exists for signed languages. Current Sign Language Recognition (SLR) systems are not interpretable and hence cannot provide feedback to learners. In this work, we propose a modular and explainable machine learning system that provides fine-grained feedback on location, movement, and hand-shape to learners of American Sign Language (ASL). In addition, we propose a waterfall architecture for combining the sub-modules, both to prevent cognitive overload for learners and to reduce the computation time for feedback. The system achieves an overall test accuracy of 87.9% on real-world data consisting of 25 signs, with 3 repetitions each, from 100 learners.
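
The waterfall architecture described above amounts to an early-exit cascade: sub-modules are checked in a fixed order, and feedback stops at the first mismatch, so the learner sees one correction at a time and later, costlier modules are skipped. Below is a minimal Python sketch of that control flow; the function name, stage ordering, and checker signatures are illustrative assumptions, and the toy checkers stand in for the paper's actual location, movement, and hand-shape models, which are not specified here.

```python
# Hypothetical sketch of a waterfall feedback cascade (not the paper's
# actual implementation): run sub-modules in order and return feedback
# for the first component that does not match the reference sign.
from typing import Callable, List, Tuple

# A checker compares a learner's attempt against a reference recording
# and returns (is_match, feedback_message). Real checkers would wrap
# the location, movement, and hand-shape models.
Checker = Callable[[object, object], Tuple[bool, str]]

def waterfall_feedback(attempt: object,
                       reference: object,
                       stages: List[Tuple[str, Checker]]) -> str:
    """Run stages in order; stop at the first failing component."""
    for name, check in stages:
        ok, feedback = check(attempt, reference)
        if not ok:
            # Early exit: one focused correction limits cognitive load
            # and skips the remaining (typically costlier) modules.
            return f"{name}: {feedback}"
    return "Sign looks correct: location, movement, and hand-shape match."

if __name__ == "__main__":
    # Toy stand-ins: location and movement pass, hand-shape fails.
    stages = [
        ("Location", lambda a, r: (True, "")),
        ("Movement", lambda a, r: (True, "")),
        ("Hand-shape", lambda a, r: (False, "adjust your hand-shape")),
    ]
    print(waterfall_feedback(attempt=None, reference=None, stages=stages))
```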

Original language: English (US)
Journal: CEUR Workshop Proceedings
Volume: 2327
State: Published - Jan 1 2019
Event: 2019 Joint ACM IUI Workshops, ACMIUI-WS 2019 - Los Angeles, United States
Duration: Mar 20 2019 → …

Keywords

  • Computer-aided learning
  • Explainable AI
  • Sign language learning

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Learn2Sign: Explainable AI for sign language learning. / Paudyal, Prajwal; Lee, Junghyo; Kamzin, Azamat; Soudki, Mohamad; Banerjee, Ayan; Gupta, Sandeep.

In: CEUR Workshop Proceedings, Vol. 2327, 01.01.2019.

@article{22bc47cb27e64601b77959d472788681,
  title = "Learn2Sign: Explainable AI for sign language learning",
  abstract = "Languages are best learned in immersive environments with rich feedback. This is especially true for signed languages due to their visual and poly-componential nature. Computer-Aided Language Learning (CALL) solutions successfully incorporate feedback for spoken languages, but no such solution exists for signed languages. Current Sign Language Recognition (SLR) systems are not interpretable and hence cannot provide feedback to learners. In this work, we propose a modular and explainable machine learning system that provides fine-grained feedback on location, movement, and hand-shape to learners of American Sign Language (ASL). In addition, we propose a waterfall architecture for combining the sub-modules, both to prevent cognitive overload for learners and to reduce the computation time for feedback. The system achieves an overall test accuracy of 87.9\% on real-world data consisting of 25 signs, with 3 repetitions each, from 100 learners.",
  keywords = "Computer-aided learning, Explainable AI, Sign language learning",
  author = "Prajwal Paudyal and Junghyo Lee and Azamat Kamzin and Mohamad Soudki and Ayan Banerjee and Sandeep Gupta",
  year = "2019",
  month = "1",
  day = "1",
  language = "English (US)",
  volume = "2327",
  journal = "CEUR Workshop Proceedings",
  issn = "1613-0073",
}

TY  - JOUR
T1  - Learn2Sign
T2  - Explainable AI for sign language learning
AU  - Paudyal, Prajwal
AU  - Lee, Junghyo
AU  - Kamzin, Azamat
AU  - Soudki, Mohamad
AU  - Banerjee, Ayan
AU  - Gupta, Sandeep
PY  - 2019/1/1
Y1  - 2019/1/1
AB  - Languages are best learned in immersive environments with rich feedback. This is especially true for signed languages due to their visual and poly-componential nature. Computer-Aided Language Learning (CALL) solutions successfully incorporate feedback for spoken languages, but no such solution exists for signed languages. Current Sign Language Recognition (SLR) systems are not interpretable and hence cannot provide feedback to learners. In this work, we propose a modular and explainable machine learning system that provides fine-grained feedback on location, movement, and hand-shape to learners of American Sign Language (ASL). In addition, we propose a waterfall architecture for combining the sub-modules, both to prevent cognitive overload for learners and to reduce the computation time for feedback. The system achieves an overall test accuracy of 87.9% on real-world data consisting of 25 signs, with 3 repetitions each, from 100 learners.
KW  - Computer-aided learning
KW  - Explainable AI
KW  - Sign language learning
UR  - http://www.scopus.com/inward/record.url?scp=85063227123&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=85063227123&partnerID=8YFLogxK
M3  - Conference article
VL  - 2327
JO  - CEUR Workshop Proceedings
JF  - CEUR Workshop Proceedings
SN  - 1613-0073
ER  -