TY - GEN
T1 - SCEPTRE: A pervasive, non-invasive, and programmable gesture recognition technology
T2 - 21st International Conference on Intelligent User Interfaces, IUI 2016
AU - Paudyal, Prajwal
AU - Banerjee, Ayan
AU - Gupta, Sandeep K.S.
N1 - Publisher Copyright:
© Copyright 2016 ACM.
PY - 2016/3/7
Y1 - 2016/3/7
AB - Communication and collaboration between deaf people and hearing people are hindered by the lack of a common language. Although there has been considerable research in this domain, there is room for a system that is ubiquitous, non-invasive, works in real time, and can be trained interactively by the user. Such a system would be powerful enough to translate gestures performed in real time, while also being flexible enough to be fully personalized for use as a platform for gesture-based HCI. We propose SCEPTRE, which uses two non-invasive wrist-worn devices to decipher gesture-based communication. The system performs classification with a multi-tiered, template-based comparison of input data from accelerometer, gyroscope, and electromyography (EMG) sensors. This work demonstrates that the system can be trained with just one to three training instances each for twenty randomly chosen signs from the American Sign Language (ASL) dictionary, as well as for user-generated custom gestures. The system achieves an accuracy of 97.72% for ASL gestures.
KW - Assistive technology
KW - Gesture-based interfaces
KW - Sign language processing
KW - Wearable and pervasive computing
UR - http://www.scopus.com/inward/record.url?scp=84963731962&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84963731962&partnerID=8YFLogxK
U2 - 10.1145/2856767.2856794
DO - 10.1145/2856767.2856794
M3 - Conference contribution
AN - SCOPUS:84963731962
SN - 9781450341370
T3 - International Conference on Intelligent User Interfaces, Proceedings IUI
SP - 282
EP - 293
BT - Proceedings of the 21st International Conference on Intelligent User Interfaces
PB - Association for Computing Machinery
Y2 - 7 March 2016 through 10 March 2016
ER -