Content area

Abstract

Current methods for underwater image enhancement focus primarily on single-frame processing. While these approaches achieve impressive results on static images, they often fail to maintain temporal coherence across the frames of an underwater video, leading to temporal artifacts and flickering. Furthermore, existing enhancement methods struggle to capture the features of underwater scenes accurately, making it difficult to handle challenges such as uneven lighting and edge blurring in complex underwater environments. To address these issues, this paper presents a dual-branch underwater video enhancement network. The network synthesizes short-range video sequences by learning to infer optical flow from individual frames; the predicted flow is then used to enforce temporal consistency across video frames, mitigating instability within frame sequences. In addition, to overcome the limitations of traditional U-Net models in complex multiscale feature fusion, this study proposes a novel underwater feature fusion module. By applying both max pooling and average pooling, the module extracts local and global features separately, and an attention mechanism adaptively reweights different regions of the feature map, effectively enhancing key regions within underwater video frames. Experimental results show that, compared with existing underwater image enhancement and temporal consistency baselines, the proposed model improves the consistency index by 30% with only a 0.6% decrease in the enhancement quality index, demonstrating its superiority in underwater video enhancement tasks.
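The pooling-and-attention idea behind the fusion module can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function names (`channel_pool`, `fuse`), the sigmoid gating, and the fixed mixing weights `w_max`/`w_avg` are all illustrative choices; the actual module presumably learns its weights and operates on multiscale U-Net features.

```python
import numpy as np

def channel_pool(x):
    """Collapse the channel axis of a (C, H, W) feature map two ways:
    max pooling keeps the strongest per-pixel response (local detail),
    average pooling keeps the per-pixel mean (global context)."""
    return x.max(axis=0), x.mean(axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse(x, w_max=0.5, w_avg=0.5):
    """Hypothetical fusion step: combine the two pooled maps into a
    spatial attention map in (0, 1), then reweight the input features
    so that salient regions dominate.  w_max and w_avg stand in for
    learned parameters."""
    mx, avg = channel_pool(x)
    attn = sigmoid(w_max * mx + w_avg * avg)  # (H, W) attention map
    return x * attn                           # broadcast over channels

feat = np.random.rand(8, 4, 4)  # toy (C, H, W) feature map
out = fuse(feat)
print(out.shape)                # same shape as the input: (8, 4, 4)
```

Because the attention map multiplies every channel at each spatial position, the output keeps the input's shape while regions with weak pooled responses are attenuated relative to strong ones.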

Details

Title
Enhancing Underwater Video from Consecutive Frames While Preserving Temporal Consistency
Author
Hu, Kai 1; Meng, Yuancheng 2; Liao, Zichen 3; Tang, Lei 4; Ye, Xiaoling 1

1 School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China; Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing 210044, China
2 School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China
3 School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China; University of Reading, Whiteknights, P.O. Box 217, Reading, Berkshire RG6 6AH, UK
4 Information and Telecommunication Branch, State Grid Jiangsu Electric Power Company, Nanjing 211125, China
Volume
13
Issue
1
First page
127
Publication year
2025
Publication date
2025
Publisher
MDPI AG
Place of publication
Basel
Country of publication
Switzerland
e-ISSN
2077-1312
Source type
Scholarly Journal
Language of publication
English
Document type
Journal Article
Publication history
Online publication date
2025-01-12
Milestone dates
2024-12-20 (Received); 2025-01-10 (Accepted)
   First posting date
12 Jan 2025
ProQuest document ID
3159529997
Document URL
https://www.proquest.com/scholarly-journals/enhancing-underwater-video-consecutive-frames/docview/3159529997/se-2?accountid=208611
Copyright
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2025-01-25
Database
ProQuest One Academic