Abstract

Object recognition is among the basic survival skills of human beings and other animals. To date, artificial intelligence (AI)-assisted high-performance object recognition has been primarily vision-based, empowered by the rapid development of sensing and computational capabilities. Here, we report a tactile-olfactory sensing array, inspired by the natural sense-fusion system of the star-nosed mole, which permits real-time acquisition of the local topography, stiffness, and odor of a variety of objects without visual input. The tactile-olfactory information is processed by a bioinspired olfactory-tactile associated machine-learning algorithm, essentially mimicking the biological fusion procedures in the neural system of the star-nosed mole. Aiming at human identification during rescue missions in challenging environments, such as dark or buried scenarios, our tactile-olfactory intelligent sensing system classified 11 typical objects with an accuracy of 96.9% in a simulated rescue scenario at a fire department test site. The tactile-olfactory bionic sensing system requires no visual input and shows superior tolerance to environmental interference, highlighting its great potential for robust object recognition in difficult environments where other methods fall short.

For object recognition in lightless environments, the authors propose an olfactory-tactile machine-learning approach inspired by the star-nosed mole's neural system. They show how bionic flexible sensor arrays allow real-time acquisition of an object's shape and odor on touch.
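The record gives no implementation details, but the fusion-and-classification idea can be illustrated with a minimal sketch, assuming the tactile and olfactory readings are flattened into feature vectors and passed through per-modality encoders before a joint 11-class head. All names, dimensions, and the layer layout below are illustrative assumptions, not the authors' published model.

```python
# Illustrative sketch only: a simple late-fusion classifier for tactile +
# olfactory feature vectors. Input sizes, layer widths, and the 11-class
# output are assumptions based on the abstract, not the authors' model.
import torch
import torch.nn as nn

class TactileOlfactoryFusionNet(nn.Module):
    def __init__(self, tactile_dim=70, olfactory_dim=6, num_classes=11):
        super().__init__()
        # Separate encoders for each modality, then a joint classifier head.
        self.tactile_encoder = nn.Sequential(nn.Linear(tactile_dim, 32), nn.ReLU())
        self.olfactory_encoder = nn.Sequential(nn.Linear(olfactory_dim, 16), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(32 + 16, 32), nn.ReLU(), nn.Linear(32, num_classes)
        )

    def forward(self, tactile, olfactory):
        # Concatenate the two modality embeddings (late fusion) and classify.
        fused = torch.cat([self.tactile_encoder(tactile),
                           self.olfactory_encoder(olfactory)], dim=-1)
        return self.classifier(fused)

# Example forward pass with random stand-in data (batch of 4 readings).
model = TactileOlfactoryFusionNet()
logits = model(torch.randn(4, 70), torch.randn(4, 6))
print(logits.shape)  # torch.Size([4, 11])
```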

Details

Title
A star-nose-like tactile-olfactory bionic sensing array for robust object recognition in non-visual environments
Author
Liu, Mengwei 1; Zhang, Yujia 1; Wang, Jiachuang 1; Qin, Nan 2; Yang, Heng 1; Sun, Ke 1; Hao, Jie 3; Lin, Shu 3; Liu, Jiarui 3; Chen, Qiang 4; Zhang, Pingping 5; Tao, Tiger H. 6

1. State Key Laboratory of Transducer Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, China (GRID:grid.511175.6); School of Graduate Study, University of Chinese Academy of Sciences, Beijing, China (GRID:grid.410726.6) (ISNI:0000 0004 1797 8419)
2. State Key Laboratory of Transducer Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, China (GRID:grid.511175.6)
3. Institute of Automation, Chinese Academy of Sciences, Beijing, China (GRID:grid.429126.a) (ISNI:0000 0004 0644 477X)
4. Shanghai Fire Research Institute of MEM, Shanghai, China (GRID:grid.495486.2) (ISNI:0000 0004 0604 7635)
5. Suzhou Huiwen Nanotechnology Co., Ltd, Suzhou, China (GRID:grid.495486.2)
6. State Key Laboratory of Transducer Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, China (GRID:grid.511175.6); School of Graduate Study, University of Chinese Academy of Sciences, Beijing, China (GRID:grid.410726.6) (ISNI:0000 0004 1797 8419); Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing, China (GRID:grid.410726.6) (ISNI:0000 0004 1797 8419); 2020 X-Lab, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, China (GRID:grid.458459.1) (ISNI:0000 0004 1792 5798); School of Physical Science and Technology, ShanghaiTech University, Shanghai, China (GRID:grid.440637.2) (ISNI:0000 0004 4657 8879); Institute of Brain-Intelligence Technology, Zhangjiang Laboratory, Shanghai, China (GRID:grid.510564.3); Shanghai Research Center for Brain Science and Brain-Inspired Intelligence, Shanghai, China (GRID:grid.511008.d); Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China (GRID:grid.507732.4)
Publication year
2022
Publication date
2022
Publisher
Nature Publishing Group
e-ISSN
2041-1723
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2619578358
Copyright
© The Author(s) 2022. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.