This paper presents a visual active SLAM method that accounts for measurement and state uncertainty during space exploration in urban search and rescue environments. An uncertainty evaluation method based on the Fisher Information Matrix (FIM) is developed to assess the localization uncertainty of the SLAM system. From the FIM, the Cramér–Rao Lower Bound (CRLB) of the pose uncertainty in the stereo visual SLAM system is derived to characterize the bound on the pose uncertainty, and optimality criteria are introduced to quantitatively evaluate the localization uncertainty. Fisher-information-based selection methods for odometry and for local bundle adjustment are proposed to identify low-uncertainty measurements for localization and mapping during the search and rescue process. With these methods, the computational efficiency of the system is improved while the localization accuracy remains comparable to that of the classical ORB-SLAM2. Moreover, based on the quantified uncertainty of local poses and map points, a generalized unary node and a generalized unary edge are defined to accelerate the computation of local state uncertainty. In addition, an active loop-closing planner that considers local state uncertainty is proposed to exploit this uncertainty in the space exploration and decision-making of a micro aerial vehicle (MAV), improving MAV localization performance in search and rescue environments. Simulations and field tests in several challenging scenarios verify the effectiveness of the proposed method.
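To make the FIM-based evaluation concrete, the Python sketch below is a minimal illustration, not the paper's implementation: the Gauss-Newton approximation F ≈ sum_i J_i^T Σ^{-1} J_i, the 6-DoF pose assumption, and all function names are assumptions of this sketch. It builds an approximate pose FIM from per-measurement Jacobians, scores it with the D-optimality criterion (the n-th root of det F; a larger score corresponds to a tighter CRLB, since CRLB = F^{-1}), and greedily keeps the most informative measurements, in the spirit of the odometry and local bundle adjustment selection methods summarized above.

import numpy as np

# Minimal sketch of FIM-based uncertainty scoring and measurement selection.
# The approximation F ~= sum_i J_i^T Sigma^{-1} J_i and all names below are
# illustrative assumptions, not the authors' implementation.

def fisher_information(jacobians, meas_cov):
    """Approximate the 6-DoF pose FIM by summing J_i^T Sigma^{-1} J_i."""
    cov_inv = np.linalg.inv(meas_cov)
    F = np.zeros((6, 6))
    for J in jacobians:          # each J: (m, 6) Jacobian of one measurement
        F += J.T @ cov_inv @ J
    return F

def d_opt(F, eps=1e-12):
    """D-optimality score: n-th root of det(F), via the mean log-eigenvalue.
    A larger score means more information, i.e. a tighter CRLB = F^{-1}."""
    eigvals = np.clip(np.linalg.eigvalsh(F), eps, None)
    return float(np.exp(np.mean(np.log(eigvals))))

def select_informative(jacobians, meas_cov, k):
    """Keep the k measurements whose removal would reduce D-optimality most.
    `jacobians` is a Python list of (m, 6) arrays."""
    scores = [d_opt(fisher_information(jacobians[:i] + jacobians[i + 1:], meas_cov))
              for i in range(len(jacobians))]
    order = np.argsort(scores)   # lowest leave-one-out score = most informative
    return [jacobians[i] for i in order[:k]]

For example, with stereo reprojection residuals each Jacobian would be a 3x6 block and meas_cov the 3x3 pixel-noise covariance; select_informative would then retain the k features that contribute most information to the pose estimate.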
Index Terms: Lower bounds; Simultaneous localization and mapping; Space exploration; Accuracy; Cramér–Rao bounds; Evacuations & rescues; Optimization; Robots; Optimality criteria; Localization; Search and rescue; Uncertainty; Efficiency; Cameras; Planning; Graph representations; Decision making; Searching; Algorithms; Fisher information; Computing time; Bundle adjustment
Authors: …; Xiong, Zhi 2; Wang, Jingqi 2; Zhang, Lin 1; Campoy, Pascual 3
1 School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou 213001, China; [email protected]
2 College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; [email protected] (Z.X.); [email protected] (J.W.)
3 Computer Vision and Aerial Robotics Group, Universidad Politécnica de Madrid, 28006 Madrid, Spain; [email protected]