TY - GEN
T1 - Dynamic modeling and motion control of a soft robotic arm segment
AU - Qiao, Zhi
AU - Nguyen, Pham H.
AU - Polygerinos, Panagiotis
AU - Zhang, Wenlong
N1 - Funding Information:
This work was supported in part by the National Science Foundation under Grant CMMI-1800940.
Publisher Copyright:
© 2019 American Automatic Control Council.
PY - 2019/7
Y1 - 2019/7
N2 - Soft robotics has shown great potential in manipulation and human-robot interaction due to its compliant nature. However, soft systems usually have many degrees of freedom and strong nonlinearities, which pose significant challenges for precise modeling and control. In this paper, a linear parameter-varying (LPV) model is developed to describe the dynamics of a soft robotic arm segment. Given the different actuation mechanisms, the LPV models for elongation and bending motions are identified from experimental data. A state-feedback H∞ controller is designed for the LPV model using a linear matrix inequality (LMI). Simulation of the state-feedback controller indicates that the closed-loop system is stable but exhibits steady-state errors. As a result, an iterative learning control (ILC) scheme with a P-type learning function is implemented to improve the tracking performance. Simulation results of the ILC+state-feedback controller show that steady-state errors are significantly reduced over iterations. In experiments, the ILC+state-feedback controller successfully moves the soft robotic arm segment to its desired position within several iterations.
AB - Soft robotics has shown great potential in manipulation and human-robot interaction due to its compliant nature. However, soft systems usually have many degrees of freedom and strong nonlinearities, which pose significant challenges for precise modeling and control. In this paper, a linear parameter-varying (LPV) model is developed to describe the dynamics of a soft robotic arm segment. Given the different actuation mechanisms, the LPV models for elongation and bending motions are identified from experimental data. A state-feedback H∞ controller is designed for the LPV model using a linear matrix inequality (LMI). Simulation of the state-feedback controller indicates that the closed-loop system is stable but exhibits steady-state errors. As a result, an iterative learning control (ILC) scheme with a P-type learning function is implemented to improve the tracking performance. Simulation results of the ILC+state-feedback controller show that steady-state errors are significantly reduced over iterations. In experiments, the ILC+state-feedback controller successfully moves the soft robotic arm segment to its desired position within several iterations.
UR - http://www.scopus.com/inward/record.url?scp=85072284929&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85072284929&partnerID=8YFLogxK
U2 - 10.23919/acc.2019.8815212
DO - 10.23919/acc.2019.8815212
M3 - Conference contribution
AN - SCOPUS:85072284929
T3 - Proceedings of the American Control Conference
SP - 5438
EP - 5443
BT - 2019 American Control Conference, ACC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 American Control Conference, ACC 2019
Y2 - 10 July 2019 through 12 July 2019
ER -