
Abstract

The internal structure of buildings is becoming increasingly complex, and providing a scientifically sound evacuation route for people trapped in a complex indoor environment is important for reducing casualties and property losses. In emergency and disaster relief settings, indoor path planning involves great uncertainty and stricter safety requirements. Q-learning is a value-based reinforcement learning algorithm that can complete path planning tasks through autonomous learning, without building mathematical models or environmental maps. We therefore propose an indoor emergency path planning method based on the Q-learning optimization algorithm. First, a grid environment model is established. A discount (decay) rate for the exploration factor is then used to optimize the Q-learning algorithm: the exploration factor in the ε-greedy strategy is dynamically adjusted before random actions are selected, which accelerates the convergence of Q-learning in a large-scale grid environment. Indoor emergency path planning experiments based on the Q-learning optimization algorithm were carried out using both simulated data and real indoor environment data. The proposed Q-learning optimization algorithm essentially converges after 500 learning rounds, approximately 2000 rounds fewer than the classic Q-learning algorithm requires, while the SARSA algorithm shows no clear convergence trend within 5000 learning rounds. The results show that the proposed Q-learning optimization algorithm is superior to the SARSA algorithm and the classic Q-learning algorithm in terms of solving time and convergence speed when planning the shortest path in a grid environment; its convergence speed is approximately five times faster than that of the classic Q-learning algorithm. In the grid environment, the proposed Q-learning optimization algorithm can plan, in a short time, the shortest path that avoids obstacle areas.
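As a rough illustration of the approach summarized above, the following is a minimal sketch of tabular Q-learning with a decaying exploration factor in an ε-greedy policy on a grid world with obstacles. The grid size, obstacle layout, start and goal cells, rewards, learning rate, discount factor, and decay schedule are illustrative assumptions, not the values used by the authors.

```python
# Minimal sketch: Q-learning with a decaying epsilon-greedy exploration factor
# on a grid environment with obstacles. All parameters below are assumptions.
import random
import numpy as np

GRID_ROWS, GRID_COLS = 10, 10
OBSTACLES = {(3, 3), (3, 4), (6, 7), (7, 7)}        # assumed obstacle cells
START, GOAL = (0, 0), (9, 9)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]        # up, down, left, right

ALPHA, GAMMA = 0.1, 0.9                             # learning rate, discount factor (assumed)
EPS_START, EPS_DECAY, EPS_MIN = 1.0, 0.995, 0.05    # decaying exploration factor

Q = np.zeros((GRID_ROWS, GRID_COLS, len(ACTIONS)))

def step(state, action):
    """Apply an action; blocked moves stay in place, reaching the goal ends the episode."""
    r, c = state[0] + ACTIONS[action][0], state[1] + ACTIONS[action][1]
    if not (0 <= r < GRID_ROWS and 0 <= c < GRID_COLS) or (r, c) in OBSTACLES:
        return state, -1.0, False                   # blocked by wall or obstacle
    if (r, c) == GOAL:
        return (r, c), 100.0, True                  # reached the exit
    return (r, c), -1.0, False                      # ordinary step cost

epsilon = EPS_START
for episode in range(500):                          # abstract reports convergence near 500 rounds
    state = START
    for _ in range(2000):                           # step cap keeps the sketch bounded
        if random.random() < epsilon:               # explore with probability epsilon
            action = random.randrange(len(ACTIONS))
        else:                                       # otherwise exploit the current Q-table
            action = int(np.argmax(Q[state[0], state[1]]))
        nxt, reward, done = step(state, action)
        td_target = reward + GAMMA * np.max(Q[nxt[0], nxt[1]]) * (not done)
        Q[state[0], state[1], action] += ALPHA * (td_target - Q[state[0], state[1], action])
        state = nxt
        if done:
            break
    epsilon = max(EPS_MIN, epsilon * EPS_DECAY)     # shrink exploration after each episode
```

After training, the greedy path from the start cell can be read off by repeatedly taking the argmax action from the Q-table; the decay schedule is what distinguishes this sketch from a fixed-ε baseline.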

Details

Title
Indoor Emergency Path Planning Based on the Q-Learning Optimization Algorithm
Author
Xu, Shenghua 1; Gu, Yang 2; Li, Xiaoyan 3; Chen, Cai 4; Hu, Yingyi 4; Yu, Sang 4; Jiang, Wenxing 5

1 Chinese Academy of Surveying and Mapping, Beijing 100830, China; [email protected]
2 Nantong Export-Oriented Agricultural Comprehensive Development Zone, Nantong 226000, China
3 School of Geomatics, Liaoning Technical University, Fuxin 123008, China; [email protected]
4 School of Marine Technology and Geomatics, Jiangsu Ocean University, Lianyungang 222005, China; [email protected] (C.C.); [email protected] (Y.H.); [email protected] (Y.S.)
5 Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 611756, China; [email protected]
Volume
11
Issue
1
First page
66
Publication year
2022
Publication date
2022
Publisher
MDPI AG
Place of publication
Basel
Country of publication
Switzerland
Publication subject
e-ISSN
2220-9964
Source type
Scholarly Journal
Language of publication
English
Document type
Journal Article
Publication history
Online publication date
2022-01-14
Milestone dates
2021-10-21 (Received); 2022-01-12 (Accepted)
   First posting date
14 Jan 2022
ProQuest document ID
2621283696
Document URL
https://www.proquest.com/scholarly-journals/indoor-emergency-path-planning-based-on-q/docview/2621283696/se-2?accountid=208611
Copyright
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2025-04-29
Database
ProQuest One Academic