Abstract

Continuum robots can enter narrow spaces and are therefore useful for search and rescue missions at disaster sites. Exploration efficiency at such sites improves if a robot can acquire several types of information simultaneously. However, no continuum robot capable of gathering information to such an extent has yet been designed, because attaching multiple sensors to the robot without compromising the flexibility of its body is challenging. In this study, we installed multiple small sensors in a distributed manner to develop a continuum-robot system with multiple information-gathering functions. A field experiment with the robot also demonstrated that the multiple types of gathered information have the potential to improve search efficiency. Specifically, we developed an active scope camera with sensory functions, equipped with a total of 80 distributed sensors, including inertial measurement units, microphones, speakers, and vibration sensors. Space saving, noise reduction, and ease of maintenance were considered in designing the robot. The developed robot can communicate with all the attached sensors even when bent to a minimum bending radius of 250 mm. We also developed an operation interface that integrates search-support technologies using the information gathered via the sensors. We demonstrated the survivor-search procedure in a simulated rubble environment at the Fukushima Robot Test Field and confirmed that the information provided through the operation interface is useful for searching for and finding survivors. The limitations of the designed system are also discussed. The development of such a continuum-robot system, which has great potential for several applications, extends the use of continuum robots to disaster response and will benefit the community at large.
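As a rough illustration of the distributed-sensing idea described in the abstract, the following Python sketch shows how readings from many small sensor nodes along a robot body might be polled and summarized for an operator interface. This is not the authors' implementation; the node count, sensor types, and the read_node() stub are illustrative assumptions only.

    # Hypothetical sketch: aggregating readings from many distributed sensor
    # nodes so an operator interface can display them together. All names and
    # values here are assumptions, not the system described in the paper.
    import random
    from dataclasses import dataclass

    @dataclass
    class NodeReading:
        node_id: int        # position index along the robot body
        sensor_type: str    # e.g., "imu", "microphone", "vibration"
        value: float        # simplified scalar reading

    def read_node(node_id: int, sensor_type: str) -> NodeReading:
        """Stand-in for querying one sensor node over a shared bus."""
        return NodeReading(node_id, sensor_type, random.random())

    def poll_all(num_nodes: int = 80) -> list[NodeReading]:
        """Poll every distributed node; sensor types are assigned round-robin here."""
        sensor_types = ["imu", "microphone", "vibration"]
        return [read_node(i, sensor_types[i % len(sensor_types)]) for i in range(num_nodes)]

    def summarize_for_operator(readings: list[NodeReading]) -> dict[str, float]:
        """Collapse per-node readings into per-sensor-type averages for the UI."""
        grouped: dict[str, list[float]] = {}
        for r in readings:
            grouped.setdefault(r.sensor_type, []).append(r.value)
        return {k: sum(v) / len(v) for k, v in grouped.items()}

    if __name__ == "__main__":
        print(summarize_for_operator(poll_all()))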

Details

Title
Development of a continuum robot enhanced with distributed sensors for search and rescue
Author
Yamauchi, Yu 1; Ambe, Yuichi 2; Nagano, Hikaru 3; Konyo, Masashi 1; Bando, Yoshiaki 4; Ito, Eisuke 1; Arnold, Solvi 5; Yamazaki, Kimitoshi 5; Itoyama, Katsutoshi 6; Okatani, Takayuki 1; Okuno, Hiroshi G. 7; Tadokoro, Satoshi 1

1 Tohoku University, Graduate School of Information Sciences, Miyagi, Japan (GRID:grid.69566.3a) (ISNI:0000 0001 2248 6943)
2 Tohoku University, Tough Cyberphysical AI Research Center, Miyagi, Japan (GRID:grid.69566.3a) (ISNI:0000 0001 2248 6943)
3 Kobe University, Graduate School of Engineering, Hyogo, Japan (GRID:grid.31432.37) (ISNI:0000 0001 1092 3077)
4 Artificial Intelligence Research Center (AIRC), National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, Japan (GRID:grid.208504.b) (ISNI:0000 0001 2230 7538)
5 Shinshu University, Mechanical Systems Engineering, Nagano, Japan (GRID:grid.263518.b) (ISNI:0000 0001 1507 4692)
6 Tokyo Institute of Technology, Graduate School of Information Science and Engineering, Tokyo, Japan (GRID:grid.32197.3e) (ISNI:0000 0001 2179 2105)
7 Waseda University, Institute of Human Robot Co-Creation, Tokyo, Japan (GRID:grid.5290.e) (ISNI:0000 0004 1936 9975)
Publication year
2022
Publication date
Dec 2022
Publisher
Springer Nature B.V.
e-ISSN
2197-4225
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2641230232
Copyright
© The Author(s) 2022. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.