Abstract

Accurately predicting submersible positions is critical to safe underwater navigation, especially in complex and dynamic marine environments. Traditional methods, such as the Kalman Filter, have been widely used in this area because they provide optimal state estimates by integrating dynamic models with sensor measurements. However, the Kalman Filter's effectiveness diminishes under the diverse and non-linear conditions present in underwater environments. This research proposes an enhanced prediction model that combines the Kalman Filter with Long Short-Term Memory (LSTM) neural networks to address these limitations. The proposed model leverages the strengths of both approaches: the recursive, real-time estimation capabilities of the Kalman Filter and the temporal dependency handling of LSTM networks. In extensive simulations, the model demonstrates improved accuracy in predicting the three-dimensional trajectories of submersibles while accounting for uncertainties such as ocean currents and varying environmental conditions. The results indicate that this hybrid approach improves prediction accuracy and enhances adaptability and robustness in challenging marine scenarios. This study contributes to the development of more reliable submersible positioning systems, with potential applications in deep-sea exploration, underwater archaeology, and search and rescue operations.
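To make the classical component of the hybrid concrete, the sketch below implements a standard constant-velocity Kalman Filter tracking a 3-D position, the kind of recursive estimator the abstract describes. The state model, noise covariances, and synthetic trajectory are illustrative assumptions, not the paper's actual configuration; in the hybrid scheme, an LSTM would additionally learn to correct systematic residuals (e.g. drift from ocean currents) that this linear model cannot capture.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate the state estimate and covariance one step forward."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Fuse a position measurement z into the estimate."""
    y = z - H @ x                    # innovation (residual)
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
# State: [px, py, pz, vx, vy, vz] with a constant-velocity transition.
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
Q = 0.01 * np.eye(6)                          # assumed process noise
R = 0.25 * np.eye(3)                          # assumed measurement noise

x = np.zeros(6)
P = np.eye(6)
for t in range(1, 11):
    true_pos = np.array([t * 1.0, t * 0.5, -t * 0.2])  # synthetic track
    z = true_pos + 0.1 * np.sin(t) * np.ones(3)        # perturbed measurement
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, z, H, R)

# x[:3] now holds the filtered 3-D position estimate; a learned LSTM
# correction would be applied on top of this estimate in the hybrid model.
```

The residual `y` computed in `kf_update` is exactly the quantity a data-driven corrector has access to at run time, which is one natural interface between the two components.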

Details

Title
Research on multi-submersible positioning prediction based on Kalman filter and LSTM neural network
Author
Miao, Yajie; Li, Boyi; Guo, Xinjing
First page
012026
Publication year
2024
Publication date
Nov 2024
Publisher
IOP Publishing
ISSN
1742-6588
e-ISSN
1742-6596
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3133731114
Copyright
Published under licence by IOP Publishing Ltd. This work is published under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.