Generation of movement explanations for testing gesture based co-operative learning applications

Ayan Banerjee, Imane Lamrani, Prajwal Paudyal, Sandeep Gupta

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper proposes an explanation framework for machine-learning-based gesture recognition systems, both to increase trust and to give users an interface for asking questions about the recognition result. Gestures have three components: a) handshape, b) location, and c) movement. Several techniques exist for explaining handshape and location recognition, but very little work exists on explaining movement. The challenge is that modeling the movement between handshapes in a gesture requires dynamic modeling of arm kinematics using differential equations. Arm models can be of varying complexity, but many of them may not be explainable. Our approach in this paper is to mine hybrid system models of gestures using a combination of handshape recognition technology and explainable kinematic models. The hybrid dynamical systems are mined from video data collected from users. A change in the dynamics of a test user is expressed through the parameters of the kinematic equations, and these parameters are converted into human-understandable explanations by experts in movement analysis. The novel outcome is the combination of fault detection in hybrid dynamical systems and machine learning to provide explanations for the recognition of continuous events. We have applied our technique to 60 users performing 20 ASL gestures. Results show that the mined parameters of the kinematic equations can represent each gesture with a precision of 83%, recall of 80%, and accuracy of 82%.
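The abstract's core idea can be illustrated in miniature: fit the parameters of a simple, explainable kinematic model to a movement segment, then compare a test user's fitted parameters against mined per-gesture models, with the parameter deviations serving as raw material for an explanation. The Python/NumPy sketch below is a hypothetical illustration assuming a constant-acceleration model and nearest-parameter matching; the function names and data shapes are invented here and are not the paper's actual pipeline.

# Illustrative sketch only: representing a gesture's movement segment by the
# parameters of an explainable kinematic model, then matching a test user's
# parameters against mined per-gesture models. The quadratic (constant-
# acceleration) model and nearest-neighbor matching are assumptions made
# for this example, not the method described in the paper.
import numpy as np

def fit_kinematic_params(t, xy):
    """Fit x(t), y(t) = p0 + v*t + 0.5*a*t^2 per axis via least squares.

    t  : (N,) timestamps of one movement segment (between two handshapes)
    xy : (N, 2) wrist positions extracted from video
    Returns a flat vector [p0, v, a] per axis (6 values), serving as the
    human-interpretable description of the movement.
    """
    A = np.stack([np.ones_like(t), t, 0.5 * t**2], axis=1)  # design matrix
    params, *_ = np.linalg.lstsq(A, xy, rcond=None)         # shape (3, 2)
    return params.T.ravel()                                 # shape (6,)

def explain_match(test_params, gesture_models):
    """Pick the closest mined gesture model; report per-parameter deviation."""
    dists = {g: np.linalg.norm(test_params - p) for g, p in gesture_models.items()}
    best = min(dists, key=dists.get)
    deviation = test_params - gesture_models[best]  # basis for an explanation
    return best, deviation

# Usage with synthetic data: one "mined" model per gesture, one test segment.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
true = np.array([[0.0, 0.2], [1.0, -0.5], [-0.8, 0.3]])    # p0, v, a per axis
xy = np.stack([np.ones_like(t), t, 0.5 * t**2], axis=1) @ true
xy += 0.01 * rng.standard_normal(xy.shape)                 # measurement noise

models = {"LEARN": fit_kinematic_params(t, xy),
          "HELP": fit_kinematic_params(t, xy[::-1])}       # toy second model
label, dev = explain_match(fit_kinematic_params(t, xy), models)
print(label, np.round(dev, 3))  # small deviations -> "movement matches LEARN"

In this toy setting, an expert (or a template) could turn the deviation vector into a statement such as "your horizontal velocity was too low," which mirrors the abstract's notion of converting kinematic parameters into human-understandable explanations.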

Original language: English (US)
Title of host publication: Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 9-16
Number of pages: 8
ISBN (Electronic): 9781728104928
DOI: 10.1109/AITest.2019.00-15
State: Published - May 17 2019
Event: 1st IEEE International Conference on Artificial Intelligence Testing, AITest 2019 - Newark, United States
Duration: Apr 4 2019 – Apr 9 2019

Publication series

Name: Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019

Conference

Conference: 1st IEEE International Conference on Artificial Intelligence Testing, AITest 2019
Country: United States
City: Newark
Period: 4/4/19 – 4/9/19

Fingerprint

  • Kinematics
  • Testing
  • Learning systems
  • Dynamical systems
  • Gesture recognition
  • Hybrid systems
  • Fault detection
  • User interfaces
  • Differential equations

Keywords

  • Artificial intelligence
  • Explainable AI
  • Gesture recognition
  • Testing

ASJC Scopus subject areas

  • Artificial Intelligence
  • Safety, Risk, Reliability and Quality

Cite this

Banerjee, A., Lamrani, I., Paudyal, P., & Gupta, S. (2019). Generation of movement explanations for testing gesture based co-operative learning applications. In Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019 (pp. 9-16). [8718241] (Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/AITest.2019.00-15
