Abstract
Quantitatively characterizing a locomotion performance objective for a human-robot system is an important consideration in designing assistive wearable robots toward human-robot symbiosis, yet this problem has been addressed only sparsely in the literature. In this study, we propose a new inverse approach that infers a human-robot collective performance objective, represented in quadratic form, from observed human-robot walking behavior. Through an innovative design of human experiments and a simulation study, we validated the effectiveness of two solution approaches to the inverse problem, based on inverse reinforcement learning (IRL) and inverse optimal control (IOC), respectively. The IRL-based experiments of human walking with a robotic transfemoral prosthesis validated the realistic applicability of the proposed inverse approach, while the IOC-based analysis provided important human-robot system properties, such as stability and robustness, that are difficult to obtain from human experiments. This study introduces a new tool to the field of wearable lower-limb robots and is expected to be expandable to quantifying joint human-robot locomotion performance objectives for personalizing wearable robot control in the future.
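The inverse problem described in the abstract, recovering the weights of a quadratic performance objective from observed closed-loop behavior, can be illustrated with a minimal, hypothetical sketch. This is not the paper's actual method: it uses a scalar discrete-time LQR system, and all names, dynamics, and weight values below are illustrative assumptions. Because a quadratic cost is identifiable only up to scale, the sketch recovers the weight ratio q/r from an observed feedback gain.

```python
import numpy as np

# Hypothetical scalar system x[t+1] = a*x[t] + b*u[t] with quadratic cost
# J = sum(q*x^2 + r*u^2). Values are illustrative assumptions only.
a, b = 0.9, 0.5

def lqr_gain(q, r, iters=500):
    """Scalar discrete-time Riccati iteration; returns gain k with u = -k*x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# "Demonstration": the gain produced by the (unknown) true weights q=4, r=1.
k_obs = lqr_gain(q=4.0, r=1.0)

# Inverse step: 1-D search over the weight ratio q/r (r fixed at 1,
# since the cost is only identifiable up to a common scale factor).
ratios = np.linspace(0.1, 10.0, 1000)
best = min(ratios, key=lambda q: abs(lqr_gain(q, 1.0) - k_obs))
print(best)  # a ratio near the true value 4.0
```

In higher dimensions the same idea applies, but the search over a scalar ratio is replaced by an optimization over the entries of the weight matrices Q and R subject to the optimality (Riccati) conditions, which is the structure IOC formulations typically exploit.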
| Original language | English (US) |
|---|---|
| Pages (from-to) | 2549-2556 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 7 |
| Issue number | 2 |
| DOIs | |
| State | Published - Apr 1 2022 |
Keywords
- Learning from demonstration
- reinforcement learning
- wearable robotics
ASJC Scopus subject areas
- Control and Systems Engineering
- Biomedical Engineering
- Human-Computer Interaction
- Mechanical Engineering
- Computer Vision and Pattern Recognition
- Computer Science Applications
- Control and Optimization
- Artificial Intelligence