Journal article, IEEE Transactions on Industrial Informatics, 2024

Sim-to-Real Transfer of Soft Robotic Navigation Strategies That Learns from the Virtual Eye-in-Hand Vision

Abstract

Steering a soft robot precisely through an unstructured environment with minimal collision remains an open challenge. When the environment is unknown, prior motion planning for navigation may not always be available. This article presents a novel sim-to-real method to guide a cable-driven soft robot in a static environment using the Simulation Open Framework Architecture (SOFA). The scenario resembles one step of a simplified transoral tracheal intubation process, in which a robotic endotracheal tube is guided to the upper trachea–larynx location by a flexible video-assisted endoscope/stylet. In SOFA, we employ a quadratic-programming inverse solver to obtain collision-free motion strategies for the endoscope/stylet manipulation based on the robot model, and we encode the virtual eye-in-hand vision. We then associate the anatomical features recognized by the virtual vision with the joint-space motion using a closed-loop nonlinear autoregressive exogenous (NARX) network. Afterward, we transfer the learned knowledge to the robot prototype, expecting it to navigate automatically to the desired spot in a new phantom environment based on its eye-in-hand vision alone. Experimental results indicate that our soft robot can efficaciously navigate through the unstructured phantom to the desired spot with minimal collision, according to what it has learned from the virtual environment. The average R-squared coefficients between the closed-loop NARX-forecasted and SOFA-referenced cable and prismatic joint-space motions are 0.963 and 0.997, respectively. The eye-in-hand views also demonstrate good alignment between the robot tip and the glottis.
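The closed-loop NARX forecasting mentioned above can be illustrated with a minimal sketch. The code below is not the authors' network (the paper maps visually recognized anatomical features to cable/prismatic joint motion); it is a hypothetical linear NARX fitted by least squares on synthetic data, where closed-loop forecasting feeds each prediction back as a lagged output — the same closed-loop structure whose forecasts are compared against the SOFA reference via R-squared. All function names and lag orders are illustrative assumptions.

```python
import numpy as np

def build_regressors(y, u, na=2, nb=2):
    """Stack lagged outputs y and lagged exogenous inputs u into a NARX regressor matrix."""
    start = max(na, nb)
    X, Y = [], []
    for t in range(start, len(y)):
        X.append(np.concatenate([y[t - na:t], u[t - nb:t]]))
        Y.append(y[t])
    return np.asarray(X), np.asarray(Y)

def fit_linear_narx(y, u, na=2, nb=2):
    """Least-squares fit of a linear NARX model (with bias term)."""
    X, Y = build_regressors(y, u, na, nb)
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], Y, rcond=None)
    return w

def closed_loop_forecast(w, y_init, u, na=2, nb=2):
    """Closed-loop forecast: predictions are fed back as the lagged outputs."""
    y = list(y_init)  # seed with the first max(na, nb) true outputs
    for t in range(max(na, nb), len(u)):
        x = np.concatenate([y[-na:], u[t - nb:t], [1.0]])
        y.append(float(x @ w))
    return np.asarray(y)

if __name__ == "__main__":
    # Synthetic joint-trajectory stand-in: y driven by exogenous input u.
    rng = np.random.default_rng(0)
    u = rng.standard_normal(200)
    y = np.zeros(200)
    for t in range(1, 200):
        y[t] = 0.5 * y[t - 1] + 0.3 * u[t - 1]
    w = fit_linear_narx(y, u)
    y_hat = closed_loop_forecast(w, y[:2], u)
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"closed-loop R^2 = {r2:.3f}")
```

In open-loop (series-parallel) mode, the true past outputs would be used at every step; closed-loop (parallel) mode, as sketched here, is the harder setting relevant to sim-to-real deployment, since the real robot has no reference trajectory to fall back on.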
Main file: tii-articles-template.pdf (16.71 MB)
Origin: Publisher files allowed on an open archive
License: CC BY-NC-ND (Attribution - NonCommercial - NoDerivatives)

Dates and versions

hal-04538063 , version 1 (09-04-2024)


Cite

Jiewen Lai, Tian-Ao Ren, Wenchao Yue, Shijian Su, Jason Y. K. Chan, et al.. Sim-to-Real Transfer of Soft Robotic Navigation Strategies That Learns from the Virtual Eye-in-Hand Vision. IEEE Transactions on Industrial Informatics, 2024, 20 (2), pp.2365-2377. ⟨10.1109/TII.2023.3291699⟩. ⟨hal-04538063⟩

Collections

SOFA TDS-MACS