Full text

Abstract

Currently, deep-learning-based methods dominate image dehazing applications. Although many sophisticated dehazing models achieve competitive performance, effective methods for extracting useful features remain under-researched. This paper therefore presents an adaptive multi-feature attention network (AMFAN) consisting of a point-weighted attention (PWA) mechanism and an adaptive multi-layer feature fusion (AMLFF) module. We start by enhancing pixel-level attention for each feature map: we design a PWA block that aggregates the global and local information of a feature map, and we employ PWA so that the model adaptively focuses on significant channels and regions. We then design a feature fusion block (FFB), which accomplishes feature-level fusion by exploiting the PWA block; together, the FFB and PWA constitute our AMLFF. The AMLFF integrates three different levels of feature maps to effectively balance the weights of the inputs to the encoder and decoder. We also train the dehazing network with a contrastive loss function so that the recovered image is pushed away from the negative sample and pulled toward the positive sample. Experimental results on both synthetic and real-world images demonstrate that this dehazing approach surpasses numerous other advanced techniques, both visually and quantitatively.
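To make the abstract's two key components concrete, below is a minimal PyTorch sketch of (a) a PWA-style block that combines global channel attention with local pixel attention, and (b) a contrastive regularizer that pulls the restored image toward the positive sample and away from the negative sample. The layer layout, the reduction ratio, and the plain L1 distances are illustrative assumptions, not the authors' exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PWABlock(nn.Module):
    """Sketch of point-weighted attention: global (channel) plus
    local (pixel) attention, as the abstract describes."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Global branch: squeeze spatial dims, produce one weight per channel.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Local branch: produce one weight per spatial position.
        self.pixel_att = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_att(x)   # re-weight significant channels
        return x * self.pixel_att(x)  # re-weight significant regions


def contrastive_loss(restored, clear, hazy, eps: float = 1e-7):
    """Pull the restored image toward the positive (clear) sample and
    away from the negative (hazy) sample, here with plain L1 distances."""
    d_pos = F.l1_loss(restored, clear)
    d_neg = F.l1_loss(restored, hazy)
    return d_pos / (d_neg + eps)


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    print(PWABlock(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
    r, c, h = (torch.rand(2, 3, 64, 64) for _ in range(3))
    print(contrastive_loss(r, c, h).item())

In practice, contrastive dehazing losses are often computed in a pretrained feature space rather than in pixel space; the ratio form above simply encodes "close to the positive, far from the negative".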

Details

Title
Adaptive Multi-Feature Attention Network for Image Dehazing
Author
Jing, Hongyuan 1; Chen, Jiaxing 1; Zhang, Chenyang 2; Wei, Shuang 2; Chen, Aidong 3; Zhang, Mengmeng 3

1 Beijing Key Laboratory of Information Service Engineering, College of Robotics, Beijing 100101, China; [email protected] (J.C.); [email protected] (A.C.)
2 College of Robotics, Beijing Union University, No. 4 Gongti North Road, Beijing 100027, China; [email protected] (C.Z.); [email protected] (S.W.)
3 Multi-Agent System Research Centre, Beijing Union University, No. 97 Beisihuan East Road, Beijing 100101, China
First page
3706
Publication year
2024
Publication date
2024
Publisher
MDPI AG
e-ISSN
2079-9292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3110460802
Copyright
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.