Abstract

Hyperspectral unmixing (HSU) is a crucial method for determining the fractional abundances of materials (endmembers) in each pixel. Most spectral unmixing methods suffer under low signal-to-noise ratios, where noisy pixels and noisy bands occur simultaneously; this calls for robust HSU techniques that exploit both the 3D (spectral–spatial) and 2D (spatial) domains. In this paper, we present a new method for robust supervised HSU based on a deep hybrid (3D and 2D) convolutional autoencoder (DHCAE) network. Most HSU methods adopt a 2D model for simplicity, yet unmixing performance depends on both spectral and spatial information. The DHCAE network exploits the spectral and spatial information of remote sensing images for abundance map estimation. In addition, DHCAE uses dropout to regularize the network for smooth learning and to avoid overfitting. Quantitative and qualitative results confirm that the proposed DHCAE network achieves better hyperspectral unmixing performance on a synthetic dataset and three real hyperspectral images, i.e., the Jasper Ridge, Urban, and Washington DC Mall datasets.
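To make the abstract's architecture concrete, the following is a minimal PyTorch sketch of a hybrid 3D/2D convolutional autoencoder for abundance estimation. The class name, layer counts, kernel sizes, and dropout rate are illustrative assumptions, not the authors' exact DHCAE configuration; it shows only the general pattern the abstract describes (3D convolutions for joint spectral–spatial features, 2D convolutions for spatial refinement, dropout for regularization, and a linear decoder for reconstruction).

```python
# A minimal sketch of a hybrid 3D/2D convolutional autoencoder for
# hyperspectral unmixing. Hyperparameters are illustrative assumptions,
# not the published DHCAE architecture.
import torch
import torch.nn as nn


class HybridConvAutoencoder(nn.Module):
    def __init__(self, num_bands: int, num_endmembers: int):
        super().__init__()
        # Encoder stage 1: 3D convolutions over (spectral, height, width)
        # capture joint spectral-spatial features.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
        )
        # Encoder stage 2: 2D convolutions refine spatial structure;
        # dropout regularizes the network, as noted in the abstract.
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * num_bands, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Dropout2d(0.2),
            nn.Conv2d(64, num_endmembers, kernel_size=1),
        )
        # Decoder: a 1x1 convolution acts as a linear mixing model whose
        # weights play the role of endmember signatures. In the supervised
        # setting, these weights could be fixed to the known endmembers.
        self.decoder = nn.Conv2d(num_endmembers, num_bands,
                                 kernel_size=1, bias=False)

    def forward(self, x):
        # x: (batch, bands, H, W) hyperspectral patch
        b, c, h, w = x.shape
        f = self.conv3d(x.unsqueeze(1))           # (b, 16, bands, H, W)
        f = f.reshape(b, -1, h, w)                # fold spectral axis into channels
        a = torch.softmax(self.conv2d(f), dim=1)  # abundances: nonnegative, sum to 1
        return self.decoder(a), a                 # reconstruction and abundance maps
```

The softmax over the endmember dimension enforces the usual abundance nonnegativity and sum-to-one constraints; training would minimize a reconstruction loss between the decoder output and the input cube.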

Details

Title
DHCAE: Deep Hybrid Convolutional Autoencoder Approach for Robust Supervised Hyperspectral Unmixing
Author
Fazal Hadi; Yang, Jingxiang; Ullah, Matee; Ahmad, Irfan; Farooque, Ghulam; Liang, Xiao
First page
4433
Publication year
2022
Publication date
2022
Publisher
MDPI AG
e-ISSN
2072-4292
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2716581879
Copyright
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.