TY - GEN
T1 - Learning efficient control of robots using myoelectric interfaces
AU - Ison, Mark
AU - Antuvan, Chris Wilson
AU - Artemiadis, Panagiotis
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/9/22
Y1 - 2014/9/22
N2 - Myoelectric-controlled interfaces are a vital component for advancing applications in prostheses, exoskeletons, and robot teleoperation. Current methods search for optimal neural decoders to enhance initial user performance. However, recent studies demonstrate that learning an inverse model of abstract decoders improves performance over time. This paper proposes a paradigm shift in myoelectric interfaces by embedding the human as the controller of a system and allowing the human to learn how to control it via control tasks with similar mapping functions. The method is tested using two different control tasks and four different abstract mappings of upper-limb myoelectric signals to control actions for those tasks. The results confirm that all subjects are able to learn the mappings and improve performance efficiency over time. A cross-trial evaluation reveals significant learning transfer when a new control task is presented using the same mapping as a previous task, resulting in enhanced initial performance on the new task. Comparison of EMG signal evolution across subjects indicates significant population-wide muscle synergy development resulting from learning and implementing the inverse model of the mapping function to complete the tasks. This suggests that efficient performance may be achieved by learning a constant, arbitrary mapping function applied to multiple control tasks rather than dynamic subject- or task-specific functions. Moreover, this method can be used for the neural control of any device or robot, without restricting them to anthropomorphic or human-related counterparts.
AB - Myoelectric-controlled interfaces are a vital component for advancing applications in prostheses, exoskeletons, and robot teleoperation. Current methods search for optimal neural decoders to enhance initial user performance. However, recent studies demonstrate that learning an inverse model of abstract decoders improves performance over time. This paper proposes a paradigm shift in myoelectric interfaces by embedding the human as the controller of a system and allowing the human to learn how to control it via control tasks with similar mapping functions. The method is tested using two different control tasks and four different abstract mappings of upper-limb myoelectric signals to control actions for those tasks. The results confirm that all subjects are able to learn the mappings and improve performance efficiency over time. A cross-trial evaluation reveals significant learning transfer when a new control task is presented using the same mapping as a previous task, resulting in enhanced initial performance on the new task. Comparison of EMG signal evolution across subjects indicates significant population-wide muscle synergy development resulting from learning and implementing the inverse model of the mapping function to complete the tasks. This suggests that efficient performance may be achieved by learning a constant, arbitrary mapping function applied to multiple control tasks rather than dynamic subject- or task-specific functions. Moreover, this method can be used for the neural control of any device or robot, without restricting them to anthropomorphic or human-related counterparts.
UR - http://www.scopus.com/inward/record.url?scp=84907369616&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84907369616&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2014.6907273
DO - 10.1109/ICRA.2014.6907273
M3 - Conference contribution
AN - SCOPUS:84907369616
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 2880
EP - 2885
BT - Proceedings - IEEE International Conference on Robotics and Automation
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2014 IEEE International Conference on Robotics and Automation, ICRA 2014
Y2 - 31 May 2014 through 7 June 2014
ER -