Abstract
Continuum robots can enter narrow spaces and are therefore useful for search-and-rescue missions at disaster sites. Exploration efficiency at such sites improves if a robot can acquire several types of information simultaneously. However, no continuum robot capable of such simultaneous information gathering has yet been designed, because attaching multiple sensors to the robot without compromising the flexibility of its body is challenging. In this study, we installed many small sensors in a distributed manner to develop a continuum-robot system with multiple information-gathering functions. A field experiment with the robot also demonstrated that the gathered information has the potential to improve search efficiency. Specifically, we developed an active scope camera with sensory functions, equipped with a total of 80 distributed sensors, including inertial measurement units, microphones, speakers, and vibration sensors. In designing the robot, we considered space saving, noise reduction, and ease of maintenance. The developed robot can communicate with all attached sensors even when bent to a minimum bending radius of 250 mm. We also developed an operation interface that integrates search-support technologies using the information gathered by the sensors. We demonstrated the survivor-search procedure in a simulated rubble environment at the Fukushima Robot Test Field and confirmed that the information provided through the operation interface is useful for searching for and finding survivors. The limitations of the designed system are also discussed. The development of such a continuum-robot system, with great potential for several applications, extends the use of continuum robots to disaster management and will benefit the community at large.
Details

1 Tohoku University, Graduate School of Information Sciences, Miyagi, Japan (GRID:grid.69566.3a) (ISNI:0000 0001 2248 6943)
2 Tohoku University, Tough Cyberphysical AI Research Center, Miyagi, Japan (GRID:grid.69566.3a) (ISNI:0000 0001 2248 6943)
3 Kobe University, Graduate School of Engineering, Hyogo, Japan (GRID:grid.31432.37) (ISNI:0000 0001 1092 3077)
4 Artificial Intelligence Research Center (AIRC), National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, Japan (GRID:grid.208504.b) (ISNI:0000 0001 2230 7538)
5 Shinshu University, Mechanical Systems Engineering, Nagano, Japan (GRID:grid.263518.b) (ISNI:0000 0001 1507 4692)
6 Tokyo Institute of Technology, Graduate School of Information Science and Engineering, Tokyo, Japan (GRID:grid.32197.3e) (ISNI:0000 0001 2179 2105)
7 Waseda University, Institute of Human Robot Co-Creation, Tokyo, Japan (GRID:grid.5290.e) (ISNI:0000 0004 1936 9975)