1. Introduction
Hyperspectral images (HSIs) represent a type of three-dimensional data, acquired by sensors across hundreds of contiguous narrow spectral bands within the visible to infrared light range [1,2]. Due to the rich spatial information and high spectral resolution inherent in HSIs, they have been widely applied in various fields such as environmental monitoring [3], mineral exploration [4], and military security [5]. However, during the actual acquisition and transmission processes, HSIs are inevitably influenced by various types of mixed noise, including Gaussian noise and impulse noise, as well as interference factors such as deadlines and stripes. This contamination leads to a deterioration in imaging quality, which subsequently hinders the performance of HSIs in subsequent tasks such as detection [6] and recognition [7,8]. In order to improve the performance of subsequent tasks, HSI denoising has become an essential preprocessing step.
Currently, extensive research has been conducted on this issue, and various methods have been proposed to restore HSIs. In early studies, each band of an HSI was treated as an independent grayscale image so that traditional denoising methods could be applied directly [9,10,11,12]. Although these methods are straightforward and easy to implement, they only consider the spatial characteristics of HSIs while neglecting the correlation between spectral bands. In subsequent research, various denoising methods that take the spatial–spectral structure of HSIs into account have been proposed [13,14,15,16,17,18]. Among these methods, owing to the inherently low-rank structure of HSIs [1], approaches based on low-rank matrix/tensor decomposition models have demonstrated superior performance. Furthermore, researchers have incorporated various types of prior information into this model, such as nonlocal self-similarity, to further enhance denoising performance [19,20,21,22,23]. Nonlocal self-similarity is an inherent characteristic of HSIs. The essence of utilizing this property for denoising is to reconstruct the current image patch by matching similar patches through a search window. However, due to the large spectral dimension of HSIs, the block matching stage is generally time-consuming, which may not meet the rapid processing requirements of real-world HSI denoising tasks.
To address the aforementioned issues, HSI denoising methods based on nonlocal subspace representation (NSR) have garnered significant attention. These methods map the HSI to a low-dimensional subspace, thereby enhancing denoising efficiency while also ensuring denoising performance [24,25,26,27,28,29,30,31]. Specifically, for a noisy HSI $\mathcal{Y} \in \mathbb{R}^{m \times n \times b}$, we perform a low-rank decomposition to obtain an orthogonal matrix $\mathbf{E} \in \mathbb{R}^{b \times k}$ and a three-dimensional tensor $\mathcal{Z} \in \mathbb{R}^{m \times n \times k}$. The face slices along the third mode of $\mathcal{Z}$ are called representation coefficient images (RCIs). Each RCI inherits the spatial structure of the clean HSI, which is the main reason why NSR-based methods can guarantee denoising performance. However, most NSR-based denoising methods focus on designing a more effective RCIs denoiser while neglecting the optimization of the RCIs themselves, which also indicates that the denoising performance of HSI has the potential to be further improved.
In order to optimize RCIs, it is necessary to analyze why RCIs can inherit the spatial structure of the clean HSI. Recently, study [32] reanalyzed the subspace representation process of HSI from the perspective of principal component analysis (PCA) and explained that each RCI is actually a combination of the bands of the clean HSI. However, PCA is affected by the noise-corrupted bands when estimating RCIs, which results in RCIs not being able to inherit the spatial structure of clean HSI well. To alleviate this problem, ref. [32] developed an elastic net model based on sparse principal component analysis (SPCA) to estimate RCIs and proposed a novel HSI denoising method. However, the method proposed in [32] does not demonstrate satisfactory performance in the task of HSI mixed denoising. This is because, on the one hand, the method constructs RCIs by only considering Gaussian noise conditions, without adequately accounting for the effects of other types of noise and interference. On the other hand, the method lacks an iterative strategy for removing mixed noise.
Inspired by the above discussion, in this paper, we first propose a novel elastic net model based on SPCA to reconstruct robust RCIs under mixed noise conditions. This construction mitigates the impact of mixed noise, enabling the RCIs to better inherit the spatial structure of the clean HSI. Furthermore, to further enhance denoising performance, we utilize the obtained RCIs as prior information, along with other prior information (the low-rank approximation of nonlocal similar RCI patches; see the subsequent description for details), to perform iterative denoising in the NSR-based denoiser. Different from other methods that optimize the RCIs denoiser, our method gives more consideration to the construction of the RCIs. We denote this novel HSI mixed denoising method via robust RCIs and nonlocal low-rank approximation as RRGNLA.
In summary, this paper presents the following three contributions:
- To adapt to the noise conditions in real HSIs, we introduce the $\ell_1$ norm into the elastic net model based on SPCA to constrain sparse noise, including impulse noise, deadlines, and stripes, thereby enabling the construction of robust RCIs.
- A mixed denoising model based on NSR is established, which utilizes the robust RCIs as prior information and takes nonlocal low-rank approximation into account. Moreover, we adopt the alternating direction method of multipliers (ADMM) to solve the proposed RRGNLA model.
- The experimental results indicate that the proposed RRGNLA method demonstrates competitive performance in both denoising effect and computational efficiency compared with other state-of-the-art methods. In the majority of experiments, RRGNLA achieves the best denoising performance with high computational efficiency.
Notations: In this paper, lowercase $x$, bold lowercase $\mathbf{x}$, bold uppercase $\mathbf{X}$, and uppercase calligraphic $\mathcal{X}$ denote a scalar, vector, matrix, and tensor, respectively. For a $p$-order tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_p}$, its mode-$j$ unfolding is denoted as $\mathbf{X}_{(j)}$. The mode-$j$ product of a tensor $\mathcal{X}$ and a matrix $\mathbf{A}$ is denoted as $\mathcal{Y} = \mathcal{X} \times_j \mathbf{A}$, which is equivalent to $\mathbf{Y}_{(j)} = \mathbf{A}\mathbf{X}_{(j)}$. The Frobenius norm and $\ell_1$ norm of $\mathbf{X}$ are defined as $\|\mathbf{X}\|_F = \sqrt{\langle \mathbf{X}, \mathbf{X} \rangle}$ and $\|\mathbf{X}\|_1 = \sum_{i,j} |x_{ij}|$, respectively, where $\langle \mathbf{X}, \mathbf{X} \rangle$ is the inner product of $\mathbf{X}$ with itself, and $x_{ij}$ denotes the element in the $i$-th row and $j$-th column of $\mathbf{X}$.
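To make the mode-$j$ product and unfolding concrete, they can be sketched in NumPy as follows (function names are ours, not from the paper):

```python
import numpy as np

def unfold(X, mode):
    """Mode-j unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold` for a tensor of the given target shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_product(X, A, mode):
    """Mode-j product X x_j A, computed as A @ X_(j) and folded back."""
    new_shape = list(X.shape)
    new_shape[mode] = A.shape[0]
    return fold(A @ unfold(X, mode), mode, tuple(new_shape))

# Example: project a 4 x 5 x 6 tensor along mode 3 (index 2) with a 2 x 6 matrix.
X = np.arange(120, dtype=float).reshape(4, 5, 6)
A = np.random.randn(2, 6)
Y = mode_product(X, A, 2)
assert Y.shape == (4, 5, 2)
# Equivalent to the matrix identity Y_(3) = A X_(3):
assert np.allclose(unfold(Y, 2), A @ unfold(X, 2))
```

The same convention (third mode = spectral dimension) is used throughout the rest of this section.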
The rest of this paper is organized as follows: Section 2 briefly introduces related work. In Section 3, we first discuss the elastic network model for estimating RCIs in mixed noise circumstances and provide the corresponding solution algorithm. Subsequently, we elaborate on the proposed RRGNLA model and its corresponding optimization algorithm. Section 4 presents and discusses the experimental results under both simulated and real HSI datasets. Finally, Section 5 concludes this paper.
2. Related Work
As mentioned above, in early studies, some classic visible-light image denoising methods were applied to HSI denoising. For example, Rasti et al. [9] applied the wavelet transform for sparse low-rank regression. Zhao et al. [10] first analyzed the low-rank characteristic of HSIs and then utilized the K-SVD algorithm to perform low-rank constraint and sparse approximation on the HSI. Although these methods do not consider the spectral correlation of HSIs, they inspired subsequent research. Zhang et al. [13] rearranged the HSI in lexicographic order to exploit its low-rank property and removed noise using a low-rank matrix recovery method. Lu et al. [14] achieved noise-free estimation by grouping the spectral bands of the HSI. Xue et al. [15] proposed a low-rank regularization method based on the spatial–spectral structure to characterize the spatial structure of HSIs and perform denoising. To better preserve the spatial structure of HSIs, Fan et al. [16] proposed a new tensor low-rank decomposition method to address the noise sensitivity problem during decomposition. Moreover, Huang et al. [17] achieved notable HSI denoising performance by embedding group sparsity into low-rank tensor decomposition.
At the same time, total variation (TV) is widely utilized in tasks that involve removing heavy Gaussian noise. Fan et al. [19] proposed an HSI denoising method based on TV regularization, which simultaneously considers the local spatial structure and the correlation among adjacent bands. Peng et al. [20] proposed an enhanced 3DTV regularization method to improve the denoising performance of HSI. In addition to TV, nonlocal self-similarity has also been widely utilized as prior information, demonstrating effective denoising performance. Sarkar et al. [21] employed a super-patch method to exploit the redundancy between the spatial and spectral domains and then utilized the nuclear norm for the restoration of HSI. Xie et al. [22] designed a high-order sparsity metric and applied it to different HSI recovery tasks. Xue et al. [23] proposed a novel denoising method that leverages the spectral inter-correlation and nonlocal similarity inherent in HSIs. Although utilizing nonlocal self-similarity has improved HSI denoising performance, it has also reduced denoising efficiency.
Recently, NSR-based denoising methods have attracted increasing attention due to their superiority in denoising efficiency. The NSR-based denoising method usually converts the original denoising problem into the RCIs denoising problem first and then utilizes nonlocal self-similarity and different regularizations to optimize the design of the RCIs denoiser, thereby improving the computational efficiency while also ensuring the denoising performance. For instance, Zhuang et al. [24] directly utilized BM3D as the denoiser for RCIs, proposing a fast HSI denoising method. Furthermore, Zhuang et al. [25] proposed an RCIs denoiser based on successive singular value decomposition (SVD) to improve the performance. Lin et al. [26] introduced low tube rank into the RCIs denoiser to enhance HSI denoising performance.
To better adapt to the noise in real-world HSIs, considerable research progress has been made in NSR-based mixed denoising methods [33,34,35,36,37,38,39,40,41]. Sun et al. [33] applied superpixel segmentation to HSI denoising, enhancing the overall denoising performance by imposing low-rank constraints on each superpixel block. Cao et al. [34] proposed a new denoising method based on nonlocal low-rank and sparse factorization, which utilizes the $\ell_1$ norm to constrain the sparse noise component, further improving the denoising performance under mixed noise. Taking into account the common features and continuity among different bands, Zheng et al. [35] proposed a mixed denoising method based on dual-factor regularization. In addition, He et al. [36] proposed and applied high-dimensional (dimension greater than three) tensor SVD to better represent the structural correlation of HSIs. This denoising method can compete with other state-of-the-art methods in terms of both denoising performance and computational efficiency.
These NSR-based mixed denoising methods demonstrate excellent performance by utilizing different regularization constraints within the denoiser. However, the construction of the RCIs, a crucial step in NSR-based denoising methods, has not been adequately considered. Although study [32] analyzed RCIs from the perspective of PCA and constructed new RCIs through an elastic net model based on SPCA, this model only considered the Gaussian noise scenario, which deviates seriously from the real-world situation. In our proposed RRGNLA method, we reconstruct robust RCIs in mixed noise circumstances, which better preserve the spatial structure of the clean HSI. Unlike NSR-based denoising methods that optimize the RCIs denoiser, the proposed RRGNLA method simply incorporates a denoiser with WNNM low-rank regularization to ensure denoising efficiency. In the subsequent sections, ablation studies demonstrate the effectiveness of our constructed robust RCIs and the iterative strategy in improving denoising performance. A more detailed discussion is provided later.
3. Proposed Method
3.1. Problem Formulation
Here, let $\mathcal{Y} \in \mathbb{R}^{m \times n \times b}$ denote the three-dimensional noisy HSI data, where $m$ and $n$ denote the height and width of the HSI in the spatial domain, respectively, and $b$ denotes the number of bands of the HSI in the spectral domain. $\mathcal{X}$, $\mathcal{N}$, and $\mathcal{S} \in \mathbb{R}^{m \times n \times b}$ denote the three-dimensional data of the underlying clean HSI, the Gaussian noise, and the sparse noise (e.g., impulse noise, deadlines, and stripes), respectively. The degradation model of HSI under mixed noise can be expressed as:
$\mathcal{Y} = \mathcal{X} + \mathcal{N} + \mathcal{S}$ (1)
According to the degradation model (1), the denoising problem is transformed into how to better recover $\mathcal{X}$ from $\mathcal{Y}$. In general, the regularization model for removing mixed noise from an HSI can be articulated as follows:
$\min_{\mathcal{X}, \mathcal{S}} \frac{1}{2}\|\mathcal{Y} - \mathcal{X} - \mathcal{S}\|_F^2 + \lambda R(\mathcal{X}) + \tau \|\mathcal{S}\|_1$ (2)
The first term is the data fidelity term, which can indirectly suppress Gaussian noise. The second term $R(\mathcal{X})$ is a regularization term for the clean HSI; many different regularizers can be adopted to obtain superior denoising performance. In this paper, due to the excellent performance of the weighted nuclear norm minimization (WNNM) low-rank regularizer [12] in various signal processing tasks, we adopt it as the regularization term for $\mathcal{X}$, which will be explained in more detail in the following subsections. The third term $\|\mathcal{S}\|_1$ is a regularization term for suppressing sparse noise. Parameters $\lambda$ and $\tau$, respectively, denote the weights of $R(\mathcal{X})$ and $\|\mathcal{S}\|_1$, which are utilized to balance the importance of the two terms and can be dynamically adjusted. While the adjustment of parameters does influence the denoising performance, the selection of regularizers for the different components plays a more crucial role.
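As an illustration, the degradation model (1) can be simulated on a toy low-rank HSI in a few lines of NumPy (all sizes and noise levels below are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, b = 32, 32, 20

# A toy low-rank clean HSI: X = Z x_3 E with subspace dimension k = 3.
k = 3
E = np.linalg.qr(rng.standard_normal((b, k)))[0]   # orthogonal b x k basis
Z = rng.standard_normal((m, n, k))                 # toy RCIs
X = np.tensordot(Z, E, axes=([2], [1]))            # m x n x b clean cube

# Gaussian noise N and sparse noise S (impulse-like, ~5% of entries).
N = 0.1 * rng.standard_normal((m, n, b))
S = np.zeros((m, n, b))
mask = rng.random((m, n, b)) < 0.05
S[mask] = rng.choice([-1.0, 1.0], size=mask.sum())

Y = X + N + S                                      # degradation model (1)
assert Y.shape == (m, n, b)
```

The denoising problem (2) is exactly the inverse task: recover `X` (and implicitly `S`) from the observed `Y`.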
3.2. Proposed Subspace Representation Model
Owing to the considerable correlation between spectral channels, HSIs inherently exhibit a low-rank structure [1]. Therefore, the clean HSI can be represented in a low-dimensional subspace spanned by $\mathbf{E} \in \mathbb{R}^{b \times k}$, where $k \ll b$. Specifically, the latent clean HSI can be decomposed into:
$\mathcal{X} = \mathcal{Z} \times_3 \mathbf{E}$ (3)
where $\times_3$ denotes the mode-3 tensor product, $\mathbf{E}$ denotes the basis of the subspace, and $\mathbf{E}$ is orthogonal, i.e., $\mathbf{E}^T\mathbf{E} = \mathbf{I}_k$, with $\mathbf{I}_k$ denoting the $k$-th order identity matrix. $\mathcal{Z} \in \mathbb{R}^{m \times n \times k}$ refers to the RCIs mentioned above. $\mathbf{E}$ can be initialized using PCA (SVD) or other variants such as HySime [42]. The tensor $\mathcal{Z}$ can be obtained by projecting the noisy HSI onto the subspace, i.e., $\mathcal{Z} = \mathcal{Y} \times_3 \mathbf{E}^T$. Therefore, the NSR-based HSI denoising model can be written as:

$\min_{\mathcal{Z}, \mathbf{E}} \frac{1}{2}\|\mathcal{Y} - \mathcal{Z} \times_3 \mathbf{E}\|_F^2 + \lambda R(\mathcal{Z}), \quad \text{s.t.} \ \mathbf{E}^T\mathbf{E} = \mathbf{I}_k$ (4)
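A minimal sketch of this PCA-style subspace initialization and projection, using the mode-3 unfolding convention from the Notations paragraph (the function name is ours):

```python
import numpy as np

def pca_subspace(Y, k):
    """Initialize the subspace basis E (b x k) from the mode-3 unfolding of Y,
    and project Y onto it to obtain the RCIs Z = Y x_3 E^T."""
    m, n, b = Y.shape
    Y3 = Y.reshape(-1, b).T                 # b x mn mode-3 unfolding
    U, _, _ = np.linalg.svd(Y3, full_matrices=False)
    E = U[:, :k]                            # top-k left singular vectors
    Z = (E.T @ Y3).T.reshape(m, n, k)       # RCIs
    return E, Z

rng = np.random.default_rng(1)
m, n, b, k = 16, 16, 30, 4
# Exactly rank-k data: projecting and back-projecting reconstructs it perfectly.
X = rng.standard_normal((m, n, k)) @ rng.standard_normal((k, b))
E, Z = pca_subspace(X, k)
X_hat = (E @ (E.T @ X.reshape(-1, b).T)).T.reshape(m, n, b)
assert np.allclose(X_hat, X)
assert np.allclose(E.T @ E, np.eye(k))
```

On noisy data the same projection mixes noisy bands into every RCI, which is precisely the weakness the SPCA-based construction below is designed to alleviate.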
In this paper, we assess the superiority of RCIs from the following two aspects. The first is the ability to characterize the spatial structure of the clean HSI, which directly impacts the performance of the subsequent denoising process. The second is the capability to suppress mixed noise: if the construction of the RCIs involves a significant number of bands affected by mixed noise, the denoising performance will be notably compromised. Study [32] confirmed that SPCA demonstrates a strong capability to characterize the spatial structure of the clean HSI while also effectively suppressing Gaussian noise. Therefore, we employ a model based on SPCA to construct the RCIs. We introduce an $\ell_1$ norm constrained regularization term into the model to suppress sparse noise, proposing a novel elastic net model based on SPCA to construct RCIs in mixed noise circumstances:
$\min_{\mathbf{E}, \mathbf{S}, \mathcal{S}} \frac{1}{2}\|\mathbf{M} - \mathbf{M}\mathbf{S}\mathbf{E}^T\|_F^2 + \beta_1\|\mathbf{S}\|_1 + \beta_2\|\mathbf{S}\|_F^2 + \tau\|\mathcal{S}\|_1, \quad \text{s.t.} \ \mathbf{E}^T\mathbf{E} = \mathbf{I}_k$ (5)

where $\mathbf{M} = (\mathcal{Y} - \mathcal{S})_{(3)}^T \in \mathbb{R}^{mn \times b}$ is the transposed mode-3 unfolding of the observation after sparse noise removal, and $\mathbf{S} \in \mathbb{R}^{b \times k}$ denotes the sparse coefficient matrix. The first term is the data fidelity term. The second and third terms are regularization terms, constrained by the $\ell_1$ norm and the Frobenius norm, respectively; they are designed to enhance the sparsity of $\mathbf{S}$ and control its robustness. The fourth term is the regularization term for the sparse noise $\mathcal{S}$, which employs the $\ell_1$ norm constraint to regulate its sparsity. Parameters $\beta_1$ and $\tau$ are applied to $\mathbf{S}$ and $\mathcal{S}$, respectively; the larger their values, the sparser $\mathbf{S}$ and $\mathcal{S}$ become. At the same time, $\beta_2$ regulates the robustness of $\mathbf{S}$. The three variables in Model (5) can be optimized separately through the ADMM algorithm, which is equivalent to solving the following three subproblems:
1. Update $\mathcal{S}$: the subproblem of optimizing $\mathcal{S}$ is:

$\mathcal{S}^{t+1} = \operatorname{soft}_{\tau}\big(\mathcal{Y} - \hat{\mathcal{X}}^{t}\big)$ (6)

where $\hat{\mathcal{X}}^{t}$ denotes the current low-rank reconstruction obtained from $\mathbf{M}\mathbf{S}^{t}(\mathbf{E}^{t})^{T}$, and $\operatorname{soft}_{\tau}(x) = \operatorname{sign}(x)\max(|x| - \tau, 0)$ is the soft threshold operator [43].
2. Update $\mathbf{E}$: the subproblem of optimizing $\mathbf{E}$ is:

$\min_{\mathbf{E}} \|\mathbf{M} - \mathbf{M}\mathbf{S}\mathbf{E}^T\|_F^2, \quad \text{s.t.} \ \mathbf{E}^T\mathbf{E} = \mathbf{I}_k$ (7)

The Frobenius norm in Equation (7) can be expanded as follows:

$\|\mathbf{M} - \mathbf{M}\mathbf{S}\mathbf{E}^T\|_F^2 = \operatorname{tr}(\mathbf{M}^T\mathbf{M}) + \operatorname{tr}(\mathbf{S}^T\mathbf{M}^T\mathbf{M}\mathbf{S}) - 2\operatorname{tr}(\mathbf{E}^T\mathbf{M}^T\mathbf{M}\mathbf{S})$ (8)

Given the variables $\mathbf{S}$ and $\mathcal{S}$, the first two terms of Equation (8) are fixed values. Consequently, the problem of minimizing Equation (7) can be reformulated as maximizing the third term of Equation (8). The optimization problem can be rewritten as follows:

$\max_{\mathbf{E}} \operatorname{tr}(\mathbf{E}^T\mathbf{M}^T\mathbf{M}\mathbf{S}), \quad \text{s.t.} \ \mathbf{E}^T\mathbf{E} = \mathbf{I}_k$ (9)

The orthogonality of $\mathbf{E}$ allows the following closed form [44,45]:

$\mathbf{E} = \mathbf{U}\mathbf{V}^T, \quad [\mathbf{U}, \boldsymbol{\Sigma}, \mathbf{V}] = \operatorname{SVD}(\mathbf{M}^T\mathbf{M}\mathbf{S})$ (10)

where $\operatorname{SVD}(\cdot)$ denotes the singular value decomposition, and $\mathbf{U}$ and $\mathbf{V}$ denote the left and right singular matrices of the decomposition, respectively.
3. Update $\mathbf{S}$: the subproblem of optimizing $\mathbf{S}$ is:

$\mathbf{S}^{t+1} = \operatorname{ssoft}_{\eta}\big(\mathbf{S}^{t} - \eta\, \mathbf{M}^T(\mathbf{M}\mathbf{S}^{t}\mathbf{E}^T - \mathbf{M})\mathbf{E}\big)$ (11)

where the parameter $\eta$ is the scaling step size, and $\operatorname{ssoft}_{\eta}(\cdot)$ denotes the scaled soft threshold operator [45]:

$\operatorname{ssoft}_{\eta}(x) = \frac{\operatorname{sign}(x)\max(|x| - \eta\beta_1, 0)}{1 + 2\eta\beta_2}$ (12)
By separately solving the three subproblems, we can obtain the robust RCIs that meet our requirements. The more detailed derivation and the overall solution process are summarized in Appendix A and Algorithm 1, respectively. Based on the previous description, we can derive the expression for the RCIs:

$\mathcal{Z} = (\mathcal{Y} - \mathcal{S}) \times_3 \mathbf{S}^T$ (13)
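The three updates above reduce to two proximal operators and an orthogonal Procrustes step. The following sketch illustrates these building blocks in NumPy (operator names and test values are ours; this is an illustration of the ingredients, not the paper's full Algorithm 1):

```python
import numpy as np

def soft(x, tau):
    """Soft-threshold operator: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def scaled_soft(x, eta, beta1, beta2):
    """Scaled soft threshold: prox of beta1*||.||_1 + beta2*||.||_F^2
    with step size eta (the elastic-net proximal operator)."""
    return soft(x, eta * beta1) / (1.0 + 2.0 * eta * beta2)

def procrustes(C):
    """argmax_E tr(E^T C) s.t. E^T E = I, solved via the SVD of C."""
    U, _, Vt = np.linalg.svd(C, full_matrices=False)
    return U @ Vt

# Sanity checks on a random instance.
rng = np.random.default_rng(2)
C = rng.standard_normal((30, 4))
E = procrustes(C)
assert np.allclose(E.T @ E, np.eye(4))
# The Procrustes solution attains at least the objective of any other feasible E.
E2 = np.linalg.qr(rng.standard_normal((30, 4)))[0]
assert np.trace(E.T @ C) >= np.trace(E2.T @ C) - 1e-9
assert np.isclose(soft(-1.0, 0.4), -0.6)
```

In the full algorithm these three operators are simply applied in turn until the iterates stabilize.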
Algorithm 1: Obtaining Robust RCIs via SPCA Operator |
3.3. Proposed RRGNLA Method
In previous studies, it has been demonstrated that the various feature images in RCIs exhibit nonlocal self-similarity [34]. In order to achieve improved denoising performance, we utilize nonlocal self-similarity to construct similar 3D blocks from $\mathcal{Z}$ and then perform low-rank approximation on these similar 3D blocks.
Here, we introduce the construction process of self-similar 3D blocks from a mathematical perspective to facilitate subsequent descriptions. Set a search window of size $w \times w$; $\mathcal{Z}_i$ is the $i$-th 3D patch of spatial size $l \times l$ selected by the search window from $\mathcal{Z}$. First, we transform $\mathcal{Z}_i$ into a two-dimensional matrix of size $l^2 \times k$. Next, using nonlocal self-similarity, we match the top $s$ similar patches of $\mathcal{Z}_i$ (block matching), where $s$ is the predetermined number of matches. Finally, the obtained similar patches are stacked into a similar block $\mathcal{M}_i$ of size $l^2 s \times k$ (stacking). We denote the above operation as $P_i(\cdot)$, and the process of obtaining $\mathcal{M}_i$ can be represented as $\mathcal{M}_i = P_i(\mathcal{Z})$. To facilitate comprehension, we also provide a flowchart of the process, as illustrated in Figure 1.
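The block matching and stacking operation can be sketched as follows (a brute-force NumPy illustration with our own parameter defaults; here the $s$ matched patches are stacked as rows, which need not match the exact layout used in the paper):

```python
import numpy as np

def block_matching(Z, i0, j0, l=4, w=20, s=10):
    """For the reference l x l patch at (i0, j0) of the RCIs Z (m x n x k),
    find the s most similar patches inside a w x w search window and stack
    their flattened versions into an s x (l*l*k) block."""
    m, n, k = Z.shape
    ref = Z[i0:i0 + l, j0:j0 + l, :].ravel()
    candidates = []
    for i in range(max(0, i0 - w // 2), min(m - l, i0 + w // 2) + 1):
        for j in range(max(0, j0 - w // 2), min(n - l, j0 + w // 2) + 1):
            patch = Z[i:i + l, j:j + l, :].ravel()
            candidates.append((np.sum((patch - ref) ** 2), i, j, patch))
    candidates.sort(key=lambda t: t[0])          # most similar first
    return np.stack([p for _, _, _, p in candidates[:s]])

rng = np.random.default_rng(3)
Z = rng.standard_normal((32, 32, 3))
M = block_matching(Z, 10, 10, l=4, w=10, s=8)
assert M.shape == (8, 48)                        # s x (l*l*k)
# The reference patch is its own best match (distance 0).
assert np.allclose(M[0], Z[10:14, 10:14, :].ravel())
```

Because the search runs over the $k$-band RCIs rather than all $b$ bands, this matching step is far cheaper than block matching in the original HSI domain, which is the efficiency argument made in the Introduction.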
Next, due to the superior performance of the WNNM low-rank regularizer [12], in this paper, we apply it for the low-rank approximation of $\mathcal{M}_i$:

$\hat{\mathcal{M}}_i = \arg\min_{\mathcal{L}_i} \frac{1}{\sigma^2}\|\mathcal{M}_i - \mathcal{L}_i\|_F^2 + \|\mathcal{L}_i\|_{w,*}$ (14)
where $\hat{\mathcal{M}}_i$ is the result of the low-rank approximation of the $i$-th 3D patch in $\mathcal{Z}$, and $\sigma^2$ denotes the noise variance in $\mathcal{Z}$, which can be derived from the noise variance in the noisy HSI $\mathcal{Y}$. For a real HSI, $\sigma$ can be estimated using Algorithm 1 presented in [42]. It is worth mentioning that, since we perform whitening [25] on the noisy HSI before obtaining the RCIs, the value of $\sigma$ is 1; in the subsequent discussion, we set $\sigma = 1$ directly.

Based on the above analysis, we present the specific formulation of the proposed RRGNLA model as follows:
$\min_{\mathcal{Z}, \mathbf{E}, \mathcal{S}} \frac{1}{2}\|\mathcal{Y} - \mathcal{Z} \times_3 \mathbf{E} - \mathcal{S}\|_F^2 + \lambda \sum_i \|P_i(\mathcal{Z})\|_{w,*} + \tau\|\mathcal{S}\|_1, \quad \text{s.t.} \ \mathbf{E}^T\mathbf{E} = \mathbf{I}_k$ (15)
where the initial $\mathcal{Z}$ in Model (15) is obtained through Algorithm 1 presented in this paper, serving as prior information to guide the subsequent denoising process. Simultaneously, $\hat{\mathcal{M}}_i$, as another piece of prior information, aids the RRGNLA model in achieving enhanced denoising performance, which is manifested by the WNNM low-rank regularizer embedded in Model (15). Model (15) can be decoupled into the following two subproblems:

(16)
(17)
The solution to the subproblem in Equation (16) has already been explained in the previous section. Here, we focus on solving the subproblem in Equation (17). We likewise optimize the three variables in Equation (17) separately in an alternating manner.
1. Update $\mathcal{S}$:
$\mathcal{S}^{t+1} = \operatorname{soft}_{\tau}(\mathcal{Y} - \mathcal{Z}^{t} \times_3 \mathbf{E}^{t})$ (18)
where $\operatorname{soft}_{\tau}(\cdot)$ is the soft threshold operator, the same as in Equation (6).
2. Update $\mathcal{Z}$:
$\min_{\mathcal{Z}} \frac{1}{2}\|\mathcal{Y} - \mathcal{Z} \times_3 \mathbf{E} - \mathcal{S}\|_F^2 + \frac{\lambda}{2}\|\mathcal{Z} - \hat{\mathcal{Z}}\|_F^2$ (19)

where $\hat{\mathcal{Z}}$ denotes the RCIs aggregated from the low-rank approximation results $\{\hat{\mathcal{M}}_i\}$.
This is a quadratic optimization problem, which can be solved by setting the gradient of Equation (19) to zero [34]. The closed form of Equation (19) is guaranteed as follows by the orthogonality of $\mathbf{E}$:
$\mathcal{Z}^{t+1} = \frac{1}{1 + \lambda}\big((\mathcal{Y} - \mathcal{S}) \times_3 \mathbf{E}^T + \lambda \hat{\mathcal{Z}}\big)$ (20)
From Equation (20), it is evident that the process of updating $\mathcal{Z}$ combines the inverse projection of the observation with the low-rank estimate $\hat{\mathcal{Z}}$ serving as prior information.
3. Update $\mathbf{E}$:
$\min_{\mathbf{E}} \frac{1}{2}\|\mathcal{Y} - \mathcal{Z} \times_3 \mathbf{E} - \mathcal{S}\|_F^2, \quad \text{s.t.} \ \mathbf{E}^T\mathbf{E} = \mathbf{I}_k$ (21)
According to the analysis presented in [33], the solution form of Equation (21) is as follows:
$\mathbf{E} = \mathbf{U}\mathbf{V}^T$ (22)
where $\mathbf{U}$ and $\mathbf{V}$ are the left and right singular matrices obtained by performing SVD on $(\mathcal{Y} - \mathcal{S})_{(3)}\mathbf{Z}_{(3)}^T$.

In summary, the flowchart and algorithmic procedure of the proposed RRGNLA method can be found in Figure 2 and Algorithm 2, respectively.
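The weighted shrinkage at the heart of the WNNM step in Equation (14) can be sketched as follows (a one-step surrogate with weights $w_i = c/(\sigma_i + \varepsilon)$, a common WNNM approximation; the constant $c$ and the matrix sizes are our choices for the illustration):

```python
import numpy as np

def wnnm_lowrank(M, c=1.0, eps=1e-8):
    """One-step weighted nuclear norm shrinkage: larger singular values are
    penalized less (weights w_i = c / (sigma_i + eps))."""
    U, sig, Vt = np.linalg.svd(M, full_matrices=False)
    w = c / (sig + eps)
    sig_shrunk = np.maximum(sig - w, 0.0)
    return U @ np.diag(sig_shrunk) @ Vt

rng = np.random.default_rng(4)
# A rank-2 matrix plus small noise: shrinkage should suppress the noise modes.
L = rng.standard_normal((48, 2)) @ rng.standard_normal((2, 6))
M = L + 0.01 * rng.standard_normal((48, 6))
M_hat = wnnm_lowrank(M, c=0.5)
assert M_hat.shape == M.shape
# The shrunk matrix is closer to the clean low-rank matrix than the noisy one.
assert np.linalg.norm(M_hat - L) <= np.linalg.norm(M - L) + 1e-12
```

Because the weights are inversely proportional to the singular values, the dominant (signal) components survive nearly untouched while the small (noise) components are driven to zero, which is the key advantage of WNNM over uniform singular value thresholding.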
Algorithm 2: RRGNLA Method for HSI Mixed Denoising |
4. Experiments
In this section, we performed experiments on both simulated and real HSI datasets. Through both visual quality comparison and quantitative evaluation, we demonstrated the effectiveness of the proposed RRGNLA method for HSI mixed denoising. Nine methods that represent the most advanced techniques for HSI denoising were selected for comparison, i.e., BM4D [46], LRMR [13], NGMeet [27], FastHyDe [24], GLF [25], SNLRSF [34], NS3R [32], HyWTNN [36], and DTSVD [47]. The parameters involved in these methods were set based on the references and then fine-tuned to achieve optimal denoising performance for the datasets. We normalized each band of HSIs before denoising. All experiments were conducted in MATLAB R2023a with Intel® Xeon® CPU E3-1230 and 16 GB memory (Lenovo, Xi’an, China).
4.1. Simulated Data Experiments
4.1.1. Simulated Datasets
In the simulated experiments, we selected two commonly used datasets with ground truth. The first dataset utilized is the Washington DC Mall (WDC,
- Case 1: Zero-mean Gaussian noise with different standard deviations is added to each band; the standard deviation of each band is randomly selected from the range [0.1, 0.2].
- Case 2: Gaussian noise is added to each band with the same settings as in Case 1. In addition, impulse noise with a density of 20% is added to 20 randomly selected bands.
- Case 3: Gaussian noise and impulse noise are added with the same settings as in Case 2. In addition, deadlines with widths ranging from 1 to 3 are added to 20 bands; 10 of these bands are selected from those affected by impulse noise, while the remaining 10 are randomly chosen from the other bands.
- Case 4: Gaussian noise, impulse noise, and deadlines are added with the same settings as in Case 3. In addition, we select 20 consecutive bands to add random stripes, ensuring that 10% of the columns in each of these bands are contaminated.
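A sketch of how the first two noise cases can be simulated in NumPy (function and parameter names are ours; deadlines and stripes are omitted for brevity):

```python
import numpy as np

def add_mixed_noise(X, rng, sigma_range=(0.1, 0.2), impulse_bands=20,
                    impulse_density=0.2):
    """Add per-band Gaussian noise (Case 1) and salt-and-pepper impulse noise
    on randomly selected bands (Case 2) to a clean HSI X normalized to [0, 1]."""
    m, n, b = X.shape
    Y = X.copy()
    # Case 1: zero-mean Gaussian noise, per-band std drawn from sigma_range.
    sig = rng.uniform(*sigma_range, size=b)
    Y += rng.standard_normal((m, n, b)) * sig
    # Case 2: impulse (salt-and-pepper) noise on `impulse_bands` random bands.
    bands = rng.choice(b, size=impulse_bands, replace=False)
    for band in bands:
        mask = rng.random((m, n)) < impulse_density
        Y[..., band][mask] = rng.choice([0.0, 1.0], size=mask.sum())
    return Y, bands

rng = np.random.default_rng(5)
X = rng.random((32, 32, 60))
Y, bands = add_mixed_noise(X, rng)
assert Y.shape == X.shape and len(set(bands.tolist())) == 20
# Corrupted bands deviate from the clean data far more than sigma alone allows.
assert np.abs(Y - X).max() > 0.5
```

Cases 3 and 4 extend this by zeroing out narrow column runs (deadlines) and adding column-wise offsets (stripes) on selected bands.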
4.1.2. Visual Quality Comparison
Here, we select representative denoising results to demonstrate the effectiveness of the proposed RRGNLA method. For the WDC dataset, Figure 3 and Figure 4 illustrate the denoised images of band 9 (in Case 3) and band 82 (in Case 4) by different denoising methods. At the same time, in order to facilitate a more effective visual quality comparison, we have highlighted the same subregion with red boxes in Figure 3 and Figure 4. Similarly, for the PaU dataset, Figure 5 and Figure 6 present the denoised images of band 17 (in Case 3) and band 88 (in Case 4) using different denoising methods.
From Figure 3, Figure 4, Figure 5 and Figure 6, it can be observed that our proposed RRGNLA method demonstrates effective denoising performance on both the WDC dataset and the PaU dataset. At the same time, we also observed that classic denoising methods, namely, BM4D and LRMR, show less efficacy in removing mixed noise. Although denoising methods based on NSR, such as GLF and SNLRSF, demonstrate good denoising results, our proposed RRGNLA method is able to recover the details of the hyperspectral images more effectively. This can be observed in the magnified sub-images in Figure 3, Figure 4, Figure 5 and Figure 6. Due to the lack of construction for RCIs in mixed noise circumstances, the denoising results of NS3R in our experiments were unsatisfactory. This also indicates that the robust RCIs we constructed played a role in removing mixed noise.
4.1.3. Quantitative Comparison
Five commonly used quantitative metrics are employed to evaluate the performance of the various denoising methods: the mean peak signal-to-noise ratio (MPSNR), the mean structural similarity index (MSSIM) [48], and the mean feature similarity index (MFSIM) [49], all of which indicate better denoising results with higher values, as well as the erreur relative globale adimensionnelle de synthèse (ERGAS) [50] and the spectral angle mean (SAM) [51], which signify improved denoising results with lower values. Meanwhile, in order to compare the denoising efficiency of the different methods, we also report the computation time (in seconds). Table 1 and Table 2, respectively, summarize the quantitative evaluation results of the different denoising methods in Cases 1–4 for the WDC dataset and the PaU dataset. The best and second-best results are highlighted in bold and underlined, respectively.
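For reference, MPSNR is simply the per-band PSNR averaged over the spectral dimension; a minimal sketch (assuming data normalized to [0, 1]):

```python
import numpy as np

def mpsnr(X_ref, X_est, peak=1.0):
    """Mean PSNR across bands: average the per-band PSNR values."""
    psnrs = []
    for band in range(X_ref.shape[2]):
        mse = np.mean((X_ref[..., band] - X_est[..., band]) ** 2)
        psnrs.append(10 * np.log10(peak ** 2 / mse))
    return float(np.mean(psnrs))

rng = np.random.default_rng(6)
X = rng.random((16, 16, 8))
noisy = X + 0.1 * rng.standard_normal(X.shape)
denoised = X + 0.01 * rng.standard_normal(X.shape)
# A 10x smaller residual raises MPSNR by roughly 20 dB.
assert mpsnr(X, denoised) > mpsnr(X, noisy) + 15
```

MSSIM and MFSIM are computed analogously by averaging the band-wise SSIM and FSIM scores.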
From Table 1 and Table 2, it can be observed that the proposed RRGNLA method demonstrates optimal performance across most metrics, further underscoring its capability to effectively remove mixed noise. The proposed RRGNLA method also demonstrates competitive denoising efficiency: in the experiments on the two simulated datasets, its computation time consistently ranks second. In all simulated experiments, FastHyDe has the shortest computation time; however, its denoising performance is only average. For example, in Case 4 of the WDC dataset, while FastHyDe is more efficient than RRGNLA, its MPSNR value is nearly 4 dB lower. NS3R employs refinement in the denoising process, causing a slightly longer computation time than RRGNLA. SNLRSF outperforms RRGNLA in several metrics on the PaU dataset; however, SNLRSF exhibits poor denoising efficiency. For example, in Case 4 of the PaU dataset, RRGNLA is nearly 40 times faster than SNLRSF. Although HyWTNN and DTSVD demonstrate competitive denoising performance on the PaU dataset compared to the proposed RRGNLA method, their performance on the WDC dataset is only average, which further demonstrates the strong adaptability of RRGNLA across different datasets. To more intuitively compare the denoising results of different methods, we also plot the PSNR and SSIM values for the WDC dataset in Case 3 and the PaU dataset in Case 4, as shown in Figure 7.
4.2. Real Data Experiments
4.2.1. Real HSI Datasets
Denoising real HSI datasets presents a greater challenge due to the complexity of the noise circumstance. Here, we have selected two commonly used real HSI datasets, including the Indian Pines dataset (India,
4.2.2. Results Comparison
Here, we present the denoising results of the Indian and Urban datasets under different methods, as shown in Figure 8 and Figure 9, respectively. To better visualize the denoising results of the Indian dataset, we present a false-color image composed of three bands (R:108, B:140, G:220) in Figure 8a. From Figure 8 and Figure 9, it can be readily observed that our proposed method not only demonstrates superior visual quality but also retains more details compared to the other methods.
In addition, we plot the spectral signature curves of specific pixels under different denoising methods to compare their denoising performance, as illustrated in Figure 10 and Figure 11. It is not difficult to observe from Figure 10 and Figure 11 that the proposed RRGNLA method is capable of effectively estimating the spectral signature values on both real HSI datasets. Finally, we present the computation times of the different denoising methods on the two real HSI datasets in Table 3, where the proposed RRGNLA method continues to demonstrate competitive denoising efficiency.
4.3. Discussion
4.3.1. Parameters Setting
In this section, we analyze the parameter values required for the experiments on both the simulated and real HSI datasets. In all experiments conducted to obtain robust RCIs, we set parameters $\beta_2$ and $\tau$ to 1 × 10−6 and 0.1, respectively, while the value of parameter $\beta_1$ was selected from the set {1 × 10−5, 2 × 10−5, 3 × 10−5, 4 × 10−5, 5 × 10−5}. In the low-rank approximation process, to enhance the computational efficiency of the proposed RRGNLA method, we select smaller 3D patch sizes and fewer similar patches; in all experiments, the values of $l$ and $s$ were set to 4 and 150, respectively. During the iterative denoising stage, we select $\lambda$ and $\tau$ from the sets {0.05, 0.1, 0.2, 0.4, 0.6, 0.8} and {0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4}, respectively, to adapt to different datasets and achieve better denoising results. The selection of the subspace dimension $k$ is crucial to the denoising results. In the simulated experiments, we select $k$ from the set {3, 4, 5, 6, 7, 8, 9, 10}; the heavier the noise, the smaller the chosen $k$. In the real-data experiments, we estimate $k$ using the HySime algorithm and subsequently fine-tune it.
4.3.2. Ablation Study
(1) Parameter sensitivity analysis: Here, we analyze the adjustable parameters involved in the proposed RRGNLA method, including the regularization parameter $\beta_1$ and the subspace dimension $k$ in the stage in which the robust RCIs are obtained, as well as the regularization parameters $\lambda$ and $\tau$ in the RCIs denoising stage. We conducted tests on the PaU dataset and plotted the changes in the MPSNR values under different parameters, as shown in Figure 12. For the regularization parameter $\beta_1$, the MPSNR values show little variation across all four cases, indicating that the proposed RRGNLA method is not sensitive to this parameter. The optimal subspace dimension $k$ (where the MPSNR value is maximized) decreases as the noise level increases. As shown in Figure 12b, in Case 1, the optimal subspace dimension is 4, whereas in Case 4, it is 3. For parameters $\lambda$ and $\tau$, we plot the 2D surface in Case 4, as shown in Figure 12c. The grid search over these two parameters intuitively reveals their impact on denoising performance.
(2) Comparison of Different Methods for Obtaining RCIs: We compare three different methods for obtaining RCIs, namely, PCA (or SVD), the SPCA in [32], and our method. Specifically, we denoise the RCIs obtained from the three methods using the WNNM low-rank regularizer. To ensure a fair comparison, the subspace dimension $k$ for all three methods was set to 5. Table 4 presents the quantitative evaluation results for the WDC and PaU datasets in Case 4. From Table 4, it can be observed that the RCIs constructed using our method exhibit strong robustness to mixed noise. Due to the lack of consideration for sparse noise, the RCIs obtained through the SPCA in [32] demonstrate only average performance in mixed denoising. For instance, on the PaU dataset, its denoising performance even underperforms the RCIs constructed using PCA, as indicated by the MSSIM and SAM metrics.
(3) Effectiveness of robust RCIs and the iterative denoising strategy: In this section, we compare the proposed RRGNLA method with three variants: (1) without the robust RCIs guidance, denoted as NLA; (2) without the iterative denoising strategy, denoted as RRG; and (3) without the $\ell_1$ norm constraint on sparse noise during the RCIs denoising stage, denoted as RRGN. Table 5 presents the comparison results of three different metrics on the WDC and PaU datasets. We can observe that the proposed RRGNLA method achieves the best quantitative results, indicating the effectiveness of the robust RCIs guidance and the iterative denoising strategy in enhancing denoising performance.
4.3.3. Comparison with Deep Learning Methods
Recently, deep learning-based HSI denoising methods have garnered increased attention. Here, we additionally compare the proposed RRGNLA method with three state-of-the-art deep learning-based methods, including SDeCNN [52], FastHyMix [53], and MAC-Net [54]. The quantitative comparison results are shown in Table 6. From Table 6, it can be observed that the proposed RRGNLA method is competitive with deep learning-based methods. As the noise level increases, the proposed RRGNLA method exhibits greater robustness in denoising performance.
4.3.4. Complexity and Convergence Analysis
In this section, we briefly analyze the computational complexity and convergence of the proposed RRGNLA method. Let m, n, and b represent the three dimensions of an HSI, and let k denote the subspace dimension. To simplify the description, we directly use the two-dimensional matrix form of the HSI data when analyzing computational complexity. During the stage in which the robust RCIs are obtained (see Algorithm 1), the computational complexity of solving Model (5) is determined by Equation (6), whose dominant costs are the matrix operations on the three-dimensional tensor and the element-wise soft-thresholding operator, the latter requiring one operation per entry of the m × n × b data. Since k is much smaller than b, these operations dominate the overall computational complexity of this stage.
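The soft-thresholding operator mentioned above acts element-wise, which is why its cost is linear in the number of entries; a standard numpy sketch (the standard proximal operator of the L1 norm, not code from the paper):

```python
import numpy as np

def soft_threshold(X, tau):
    """Element-wise soft-thresholding, the proximal operator of the
    L1 norm; shrinks each entry toward zero by tau and zeroes out
    entries smaller than tau. Cost is one pass over the array."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
```

Applied to the residual, this is the step that separates the sparse-noise term (impulse noise, stripes) from the low-rank signal component.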
During the RCI denoising stage (see Algorithm 2), the computational complexity is mainly concentrated in the WNNM low-rank regularizer (comprising block matching and low-rank approximation) and the solution of Model (17).
1. Block matching: the cost of this step is governed by the search window size and the spatial size of the 3D patch, since calculating the similarity of every candidate patch within the search window requires one distance evaluation per candidate.
2. Low-rank approximation: the computational complexity of the low-rank approximation is dominated by the SVD step applied to the low-rank matrix formed by stacking the s similar patches.
3. Solving Model (17): the computational complexity of solving Model (17) is determined by Equation (18), whose dominant costs are the matrix operations and the element-wise soft-thresholding operation. Disregarding the impact of k, these terms dominate the overall complexity of the iterative denoising process.
In summary, for an HSI of size m × n × b, the overall computational complexity of the RCI denoising stage is dominated by the block-matching and SVD steps described above.
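To make the two dominant costs of this stage concrete, the following sketch (illustrative parameter values and our own naming, not the authors' implementation) pairs exhaustive block matching within a search window with WNNM-style shrinkage of the stacked similar-patch matrix, i.e., one SVD followed by weighted soft-thresholding of the singular values with weights inversely proportional to them, as in the WNNM formulation of [12]:

```python
import numpy as np

def block_match(img, ref_xy, patch=4, search=3, s=5):
    """Collect the s patches most similar (squared Euclidean distance)
    to the reference patch inside a (2*search+1)^2 window. The cost is
    one patch-sized distance evaluation per candidate location."""
    H, W = img.shape
    x0, y0 = ref_xy
    ref = img[x0:x0 + patch, y0:y0 + patch]
    cands = []
    for x in range(max(0, x0 - search), min(H - patch, x0 + search) + 1):
        for y in range(max(0, y0 - search), min(W - patch, y0 + search) + 1):
            p = img[x:x + patch, y:y + patch]
            cands.append((np.sum((p - ref) ** 2), p.reshape(-1)))
    cands.sort(key=lambda t: t[0])                 # nearest first
    return np.stack([v for _, v in cands[:s]], axis=1)  # (patch^2, s)

def wnnm_shrink(M, c=2.0, eps=1e-8):
    """WNNM-style low-rank approximation of the similar-patch matrix:
    one SVD (the dominant cost of this step) followed by weighted
    soft-thresholding of the singular values."""
    U, sig, Vt = np.linalg.svd(M, full_matrices=False)
    w = c * np.sqrt(M.shape[1]) / (sig + eps)      # small sigma -> large weight
    sig = np.maximum(sig - w, 0.0)
    return (U * sig) @ Vt
```

The quadratic number of candidate locations per reference patch is exactly the block-matching bottleneck discussed in the Introduction, and the per-group SVD is the bottleneck of the low-rank approximation.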
Additionally, to validate the convergence of the proposed RRGNLA method, we provide the convergence curves for different metrics on the WDC and PaU datasets in Case 3 and Case 4, as illustrated in Figure 13 and Figure 14, respectively. As the number of iterations increases, the changes in the values of MPSNR, MSSIM, and SAM tend to approach zero. This clearly demonstrates the convergence of the proposed RRGNLA method.
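A stopping rule matching this observed behavior halts the iterations once the relative change of a quality metric between consecutive iterations drops below a tolerance; a minimal sketch (the tolerance value is illustrative, not from the paper):

```python
def has_converged(metric_history, tol=1e-3):
    """Return True when the relative change of a monitored metric
    (e.g., MPSNR) between the last two iterations falls below tol."""
    if len(metric_history) < 2:
        return False
    prev, curr = metric_history[-2], metric_history[-1]
    return abs(curr - prev) / max(abs(prev), 1e-12) < tol
```

In practice the same criterion can be applied to the relative change of the iterate itself when a clean reference (and hence MPSNR) is unavailable.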
5. Conclusions
In this paper, we present a novel method based on NSR for the removal of mixed noise in HSIs. The advanced denoising performance of this method is due to two main factors: (1) by utilizing an SPCA-based elastic net model to construct the RCIs, the obtained RCIs better inherit the spatial structure of the clean HSI; (2) the obtained robust RCIs are utilized as prior information for iterative denoising. Extensive comparative experiments and ablation studies demonstrate the effectiveness of constructing robust RCIs and employing them as prior information in an iterative denoiser for removing mixed noise from HSIs. In the future, we will explore how to incorporate smooth prior information into the construction of robust RCIs to further enhance denoising performance.
Conceptualization and methodology, J.S.; software, C.W.; validation, J.S.; formal analysis and investigation, F.H. and C.L.; resources, C.W.; data curation, Z.Y.; writing—original draft preparation, J.S.; writing—review and editing, J.S.; visualization, F.H.; supervision, B.G.; project administration and funding acquisition, B.G. All authors have read and agreed to the published version of the manuscript.
The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.
We thank the editor and anonymous reviewers for their suggestions and comments, which have helped us to improve the quality of our work.
The authors declare no conflicts of interest.
Figure 1. Flowchart of constructing similar 3D blocks in this paper, where the yellow dashed square is the search window, the red square is the 3D patch, and the other color squares are similar patches.
Figure 3. Denoised images using different methods on the band 9 of the WDC dataset in Case 3. (a) Original, (b) noisy, (c) BM4D, (d) LRMR, (e) NGMeet, (f) FastHyDe, (g) GLF, (h) SNLRSF, (i) NS3R, (j) HyWTNN, (k) DTSVD, (l) RRGNLA.
Figure 4. Denoised images using different methods on the band 82 of the WDC dataset in Case 4. (a) Original, (b) noisy, (c) BM4D, (d) LRMR, (e) NGMeet, (f) FastHyDe, (g) GLF, (h) SNLRSF, (i) NS3R, (j) HyWTNN, (k) DTSVD, (l) RRGNLA.
Figure 5. Denoised images using different methods on the band 17 of the PaU dataset in Case 3. (a) Original, (b) noisy, (c) BM4D, (d) LRMR, (e) NGMeet, (f) FastHyDe, (g) GLF, (h) SNLRSF, (i) NS3R, (j) HyWTNN, (k) DTSVD, (l) RRGNLA.
Figure 6. Denoised images using different methods on the band 88 of the PaU dataset in Case 4. (a) Original, (b) noisy, (c) BM4D, (d) LRMR, (e) NGMeet, (f) FastHyDe, (g) GLF, (h) SNLRSF, (i) NS3R, (j) HyWTNN, (k) DTSVD, (l) RRGNLA.
Figure 7. PSNR and SSIM values of each band for the WDC and the PaU datasets. (a,b) The WDC dataset in Case 3. (c,d) The PaU dataset in Case 4.
Figure 8. Denoising results for Indian Pines dataset: (a) original false color image, (b) BM4D, (c) LRMR, (d) NGMeet, (e) FastHyDe, (f) GLF, (g) original false color image, (h) SNLRSF, (i) NS3R, (j) HyWTNN, (k) DTSVD, (l) RRGNLA.
Figure 9. Denoising results for Urban dataset: (a) original, (b) BM4D, (c) LRMR, (d) NGMeet, (e) FastHyDe, (f) GLF, (g) original, (h) SNLRSF, (i) NS3R, (j) HyWTNN, (k) DTSVD, (l) RRGNLA.
Figure 10. Spectral signature curves on pixel point (10, 90) of the Indian dataset: (a) original, (b) BM4D, (c) LRMR, (d) NGMeet, (e) FastHyDe, (f) GLF, (g) original, (h) SNLRSF, (i) NS3R, (j) HyWTNN, (k) DTSVD, (l) RRGNLA.
Figure 11. Spectral signature curves on pixel point (10, 90) of the Urban dataset: (a) original, (b) BM4D, (c) LRMR, (d) NGMeet, (e) FastHyDe, (f) GLF, (g) original, (h) SNLRSF, (i) NS3R, (j) HyWTNN, (k) DTSVD, (l) RRGNLA.
Figure 12. Sensitivity analysis of different parameters on the PaU dataset: (a) MPSNR versus the regularization parameter, (b) MPSNR versus the subspace dimension k, (c) MPSNR versus the two regularization parameters in Case 4.
Figure 13. Convergence analysis of three metrics on the WDC dataset: (a) MPSNR, (b) MSSIM, (c) SAM.
Figure 14. Convergence analysis of three metrics on the PaU dataset: (a) MPSNR, (b) MSSIM, (c) SAM.
Quantitative comparison results using different methods with the simulated Cases 1–4 on the WDC datasets.
Data | Case | Index | Noisy | BM4D | LRMR | NGMeet | FastHyDe | GLF | SNLRSF | NS3R | HyWTNN | DTSVD | RRGNLA |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
WDC | 1 | MPSNR | 16.35 | 30.19 | 31.34 | 35.42 | 35.69 | 36.57 | 36.55 | 35.31 | 36.24 | 36.41 | 36.66 |
MSSIM | 0.265 | 0.847 | 0.889 | 0.958 | 0.957 | 0.965 | 0.966 | 0.961 | 0.963 | 0.965 | 0.967 | ||
MFSIM | 0.619 | 0.910 | 0.943 | 0.974 | 0.974 | 0.979 | 0.979 | 0.975 | 0.978 | 0.979 | 0.980 | ||
ERGAS | 612.87 | 118.18 | 103.55 | 66.47 | 62.39 | 56.78 | 57.01 | 66.69 | 59.02 | 57.94 | 56.32 | ||
SAM | 35.577 | 6.641 | 6.922 | 3.791 | 3.542 | 3.166 | 3.223 | 3.500 | 3.356 | 3.266 | 3.191 | ||
Time(s) | - | 579.10 | 221.37 | 94.83 | 9.97 | 1046.41 | 796.05 | 38.28 | 38.67 | 36.10 | 35.07 | ||
2 | MPSNR | 15.77 | 29.10 | 30.17 | 32.43 | 33.88 | 34.92 | 36.14 | 32.35 | 35.87 | 35.76 | 36.32 | |
MSSIM | 0.244 | 0.812 | 0.859 | 0.911 | 0.935 | 0.945 | 0.963 | 0.913 | 0.961 | 0.958 | 0.965 | ||
MFSIM | 0.605 | 0.897 | 0.931 | 0.952 | 0.967 | 0.973 | 0.977 | 0.952 | 0.976 | 0.975 | 0.979 | ||
ERGAS | 686.98 | 149.77 | 151.94 | 172.92 | 133.19 | 129.44 | 59.87 | 175.62 | 61.89 | 64.54 | 58.62 | ||
SAM | 37.812 | 9.119 | 10.148 | 10.281 | 8.504 | 8.343 | 3.390 | 10.330 | 3.495 | 3.615 | 3.314 | ||
Time(s) | - | 587.23 | 228.44 | 105.68 | 9.33 | 1129.33 | 805.21 | 39.37 | 37.41 | 35.33 | 34.66 | ||
3 | MPSNR | 15.61 | 28.19 | 29.67 | 31.94 | 32.86 | 34.13 | 35.84 | 31.72 | 35.09 | 35.26 | 36.06 | |
MSSIM | 0.243 | 0.801 | 0.853 | 0.913 | 0.919 | 0.936 | 0.961 | 0.904 | 0.959 | 0.952 | 0.963 | ||
MFSIM | 0.601 | 0.888 | 0.925 | 0.952 | 0.958 | 0.968 | 0.976 | 0.947 | 0.974 | 0.972 | 0.978 | ||
ERGAS | 706.86 | 227.97 | 179.58 | 182.52 | 165.45 | 154.32 | 62.07 | 213.97 | 67.74 | 89.01 | 60.44 | ||
SAM | 38.269 | 13.032 | 11.647 | 10.983 | 9.998 | 9.586 | 3.534 | 12.283 | 3.685 | 4.339 | 3.449 | ||
Time(s) | - | 593.59 | 225.44 | 95.86 | 8.72 | 1044.09 | 848.29 | 47.05 | 36.96 | 35.73 | 35.24 | ||
4 | MPSNR | 15.26 | 27.08 | 28.93 | 31.05 | 31.36 | 33.26 | 34.76 | 31.09 | 34.73 | 34.69 | 35.04 | |
MSSIM | 0.233 | 0.769 | 0.839 | 0.898 | 0.888 | 0.922 | 0.956 | 0.894 | 0.955 | 0.945 | 0.960 | ||
MFSIM | 0.594 | 0.872 | 0.916 | 0.944 | 0.942 | 0.960 | 0.972 | 0.942 | 0.972 | 0.968 | 0.975 | ||
ERGAS | 726.73 | 282.70 | 191.22 | 178.64 | 175.884 | 159.303 | 70.413 | 190.590 | 70.809 | 86.587 | 68.391 | ||
SAM | 39.671 | 19.112 | 13.132 | 11.516 | 11.573 | 10.150 | 3.918 | 12.858 | 3.949 | 4.782 | 3.783 | ||
Time(s) | - | 581.96 | 230.53 | 96.34 | 8.39 | 1017.04 | 844.06 | 42.67 | 38.55 | 36.71 | 34.41 |
Best and second best results on each line are highlighted in bold and underline, respectively.
Quantitative comparison results using different methods with the simulated Cases 1–4 on the PaU datasets.
Data | Case | Index | Noisy | BM4D | LRMR | NGMeet | FastHyDe | GLF | SNLRSF | NS3R | HyWTNN | DTSVD | RRGNLA |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
PaU | 1 | MPSNR | 16.52 | 32.00 | 29.89 | 34.77 | 34.97 | 35.91 | 35.82 | 35.34 | 35.28 | 35.49 | 36.56 |
MSSIM | 0.202 | 0.867 | 0.772 | 0.915 | 0.930 | 0.939 | 0.936 | 0.936 | 0.931 | 0.932 | 0.946 | ||
MFSIM | 0.556 | 0.916 | 0.908 | 0.959 | 0.960 | 0.966 | 0.967 | 0.965 | 0.963 | 0.964 | 0.971 | ||
ERGAS | 612.03 | 99.43 | 130.57 | 81.37 | 72.31 | 65.27 | 66.11 | 70.89 | 69.94 | 68.07 | 60.93 | ||
SAM | 41.323 | 6.992 | 11.141 | 6.877 | 5.136 | 4.733 | 4.764 | 5.246 | 5.006 | 4.841 | 4.404 | ||
Time(s) | - | 331.90 | 201.25 | 138.38 | 6.42 | 1236.6 | 1057.7 | 44.09 | 43.40 | 36.91 | 32.64 | ||
2 | MPSNR | 15.331 | 29.77 | 28.12 | 30.11 | 31.59 | 32.93 | 34.59 | 30.08 | 34.64 | 33.91 | 35.39 | |
MSSIM | 0.173 | 0.795 | 0.711 | 0.819 | 0.884 | 0.904 | 0.923 | 0.827 | 0.925 | 0.916 | 0.937 | ||
MFSIM | 0.529 | 0.887 | 0.880 | 0.916 | 0.941 | 0.954 | 0.960 | 0.920 | 0.958 | 0.955 | 0.966 | ||
ERGAS | 754.73 | 149.88 | 213.34 | 223.66 | 168.57 | 158.78 | 77.01 | 220.89 | 75.31 | 86.78 | 69.91 | ||
SAM | 44.518 | 11.010 | 15.688 | 15.368 | 12.201 | 11.858 | 5.197 | 14.926 | 5.242 | 6.103 | 5.114 | ||
Time(s) | - | 333.93 | 206.87 | 136.16 | 9.20 | 1260.59 | 1059.9 | 47.52 | 42.26 | 35.39 | 34.22 | ||
3 | MPSNR | 15.076 | 28.02 | 27.43 | 29.46 | 30.63 | 31.99 | 34.00 | 29.13 | 34.36 | 33.36 | 34.98 | |
MSSIM | 0.171 | 0.772 | 0.703 | 0.834 | 0.860 | 0.894 | 0.918 | 0.823 | 0.921 | 0.911 | 0.934 | ||
MFSIM | 0.524 | 0.871 | 0.870 | 0.918 | 0.930 | 0.949 | 0.958 | 0.915 | 0.957 | 0.952 | 0.964 | ||
ERGAS | 780.03 | 257.54 | 246.79 | 236.86 | 196.70 | 182.48 | 83.03 | 251.63 | 80.35 | 90.69 | 72.81 | ||
SAM | 44.940 | 17.148 | 17.552 | 15.693 | 13.626 | 13.049 | 5.535 | 16.457 | 5.424 | 6.203 | 5.252 | ||
Time(s) | - | 329.52 | 207.03 | 137.96 | 9.44 | 1221.31 | 1065.4 | 45.35 | 43.06 | 35.61 | 33.03 | ||
4 | MPSNR | 14.40 | 25.42 | 26.01 | 28.07 | 28.16 | 31.53 | 33.37 | 27.79 | 33.58 | 32.71 | 34.34 | |
MSSIM | 0.161 | 0.693 | 0.667 | 0.800 | 0.748 | 0.886 | 0.907 | 0.777 | 0.905 | 0.897 | 0.924 | ||
MFSIM | 0.510 | 0.831 | 0.848 | 0.905 | 0.887 | 0.946 | 0.954 | 0.897 | 0.949 | 0.947 | 0.959 | ||
ERGAS | 827.45 | 376.33 | 270.67 | 240.76 | 238.75 | 182.21 | 89.39 | 264.99 | 84.34 | 96.51 | 78.32 | ||
SAM | 47.039 | 27.129 | 19.849 | 16.592 | 17.083 | 13.058 | 5.908 | 18.136 | 5.907 | 6.435 | 5.505 | ||
Time(s) | - | 334.42 | 201.34 | 135.09 | 9.74 | 1261.73 | 1049.35 | 47.81 | 46.72 | 33.46 | 32.79 |
Best and second best results on each line are highlighted in bold and underline, respectively.
Computation times of different denoising methods on two real HSI datasets.
Data | Index | BM4D | LRMR | NGMeet | FastHyDe | GLF | SNLRSF | NS3R | HyWTNN | DTSVD | RRGNLA |
---|---|---|---|---|---|---|---|---|---|---|---|
Indian | Time(s) | 163.77 | 78.52 | 50.09 | 3.38 | 334.81 | 269.18 | 9.32 | 13.74 | 11.17 | 15.56 |
Urban | Time(s) | 732.08 | 411.41 | 149.49 | 14.42 | 1755.66 | 1193.03 | 39.97 | 53.14 | 42.19 | 49.53 |
Comparison of denoising performance of RCIs constructed using different methods.
Data | Index | Noisy | PCA | SPCA in [32] | Ours |
---|---|---|---|---|---|
WDC | MPSNR | 15.26 | 23.04 | 31.06 | 31.87 |
MSSIM | 0.233 | 0.817 | 0.896 | 0.910 | |
SAM | 39.671 | 26.251 | 12.412 | 9.418 | |
PaU | MPSNR | 14.40 | 27.06 | 27.51 | 28.87 |
MSSIM | 0.161 | 0.779 | 0.768 | 0.804 | |
SAM | 47.039 | 18.391 | 18.586 | 17.046 |
Best and second best results on each line are highlighted in bold and underline, respectively.
Quantitative comparison results of different methods on two simulated datasets.
Data | Case | Index | NLA | RRG | RRGN | RRGNLA |
---|---|---|---|---|---|---|
WDC | 3 | MPSNR | 35.62 | 32.66 | 34.40 | 36.06 |
MSSIM | 0.957 | 0.929 | 0.942 | 0.963 | ||
SAM | 4.149 | 7.228 | 9.237 | 3.449 | ||
4 | MPSNR | 34.55 | 32.10 | 33.22 | 35.04 | |
MSSIM | 0.950 | 0.918 | 0.930 | 0.960 | ||
SAM | 4.621 | 7.788 | 9.893 | 3.783 | ||
PaU | 3 | MPSNR | 34.45 | 30.13 | 32.63 | 34.98 |
MSSIM | 0.927 | 0.865 | 0.907 | 0.934 | ||
SAM | 5.587 | 11.487 | 10.736 | 5.252 | ||
4 | MPSNR | 33.72 | 29.41 | 31.99 | 34.34 | |
MSSIM | 0.916 | 0.831 | 0.897 | 0.924 | ||
SAM | 6.009 | 12.308 | 11.037 | 5.505 |
Best result on each line is highlighted in bold.
Quantitative comparison results of the proposed RRGNLA method and three deep learning-based methods on the WDC dataset.
Data | Case | Index | SDeCNN | FastHyMix | MAC-Net | RRGNLA |
---|---|---|---|---|---|---|
WDC | 1 | MPSNR | 31.06 | 36.06 | 36.48 | 36.66 |
MSSIM | 0.883 | 0.960 | 0.964 | 0.967 | ||
SAM | 7.051 | 3.429 | 3.210 | 3.191 | ||
2 | MPSNR | 29.16 | 34.59 | 35.97 | 36.32 | |
MSSIM | 0.832 | 0.943 | 0.961 | 0.965 | ||
SAM | 10.863 | 8.078 | 3.423 | 3.314 | ||
3 | MPSNR | 28.44 | 33.98 | 35.67 | 36.06 | |
MSSIM | 0.819 | 0.936 | 0.960 | 0.963 | ||
SAM | 12.716 | 9.259 | 3.492 | 3.449 | ||
4 | MPSNR | 27.77 | 33.18 | 34.84 | 35.04 | |
MSSIM | 0.799 | 0.934 | 0.957 | 0.960 | ||
SAM | 14.338 | 9.267 | 3.824 | 3.783 |
Best result on each line is highlighted in bold.
Appendix A
For Equation (7), we rewrite the Frobenius norm expansion; with the variable S fixed, the resulting subproblem is solved following Appendix B of the cited study. For Equation (11), the solution is obtained via the proximal gradient method.
References
1. Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Liu, S.; Plaza, J.; Rasti, B.; Plaza, A. Advances in hyperspectral image and signal processing: A comprehensive overview of the state of the art. IEEE Geosci. Remote Sens. Mag.; 2017; 5, pp. 37-78. [DOI: https://dx.doi.org/10.1109/MGRS.2017.2762087]
2. Peng, J.; Sun, W.; Li, H.-C.; Li, W.; Meng, X.; Ge, C.; Du, Q. Low-rank and sparse representation for hyperspectral image processing: A review. IEEE Geosci. Remote Sens. Mag.; 2021; 10, pp. 10-43. [DOI: https://dx.doi.org/10.1109/MGRS.2021.3075491]
3. Stuart, M.B.; McGonigle, A.J.; Willmott, J.R. Hyperspectral imaging in environmental monitoring: A review of recent developments and technological advances in compact field deployable systems. Sensors; 2019; 19, 3071. [DOI: https://dx.doi.org/10.3390/s19143071] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31336796]
4. Goetz, A.F. Three decades of hyperspectral remote sensing of the Earth: A personal view. Remote Sens. Environ.; 2009; 113, pp. S5-S16. [DOI: https://dx.doi.org/10.1016/j.rse.2007.12.014]
5. Shimoni, M.; Haelterman, R.; Perneel, C. Hyperspectral imaging for military and security applications: Combining myriad processing and sensing techniques. IEEE Geosci. Remote Sens. Mag.; 2019; 7, pp. 101-117. [DOI: https://dx.doi.org/10.1109/MGRS.2019.2902525]
6. Zhang, Y.; Du, B.; Zhang, L.; Liu, T. Joint sparse representation and multitask learning for hyperspectral target detection. IEEE Trans. Geosci. Remote Sens.; 2016; 55, pp. 894-906. [DOI: https://dx.doi.org/10.1109/TGRS.2016.2616649]
7. Zeng, S.; Wang, Z.; Gao, C.; Kang, Z.; Feng, D. Hyperspectral image classification with global–local discriminant analysis and spatial–spectral context. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2018; 11, pp. 5005-5018. [DOI: https://dx.doi.org/10.1109/JSTARS.2018.2878336]
8. Cao, X.; Yao, J.; Xu, Z.; Meng, D. Hyperspectral image classification with convolutional neural network and active learning. IEEE Trans. Geosci. Remote Sens.; 2020; 58, pp. 4604-4616. [DOI: https://dx.doi.org/10.1109/TGRS.2020.2964627]
9. Rasti, B.; Sveinsson, J.R.; Ulfarsson, M.O. Wavelet-based sparse reduced-rank regression for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens.; 2014; 52, pp. 6688-6698. [DOI: https://dx.doi.org/10.1109/TGRS.2014.2301415]
10. Zhao, Y.-Q.; Yang, J. Hyperspectral image denoising via sparse representation and low-rank constraint. IEEE Trans. Geosci. Remote Sens.; 2014; 53, pp. 296-308. [DOI: https://dx.doi.org/10.1109/TGRS.2014.2321557]
11. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process.; 2007; 16, pp. 2080-2095. [DOI: https://dx.doi.org/10.1109/TIP.2007.901238] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17688213]
12. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted Nuclear Norm Minimization with Application to Image Denoising. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); Columbus, OH, USA, 24–27 June 2014; pp. 2862-2869.
13. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens.; 2013; 52, pp. 4729-4743. [DOI: https://dx.doi.org/10.1109/TGRS.2013.2284280]
14. Lu, T.; Li, S.; Fang, L.; Ma, Y.; Benediktsson, J.A. Spectral–spatial adaptive sparse representation for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens.; 2015; 54, pp. 373-385. [DOI: https://dx.doi.org/10.1109/TGRS.2015.2457614]
15. Xue, J.; Zhao, Y.; Liao, W.; Kong, S.G. Joint spatial and spectral low-rank regularization for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens.; 2017; 56, pp. 1940-1958. [DOI: https://dx.doi.org/10.1109/TGRS.2017.2771155]
16. Fan, H.; Chen, Y.; Guo, Y.; Zhang, H.; Kuang, G. Hyperspectral image restoration using low-rank tensor recovery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2017; 10, pp. 4589-4604. [DOI: https://dx.doi.org/10.1109/JSTARS.2017.2714338]
17. Huang, Z.; Li, S.; Fang, L.; Li, H.; Benediktsson, J.A. Hyperspectral image denoising with group sparse and low-rank tensor decomposition. IEEE Access; 2017; 6, pp. 1380-1390. [DOI: https://dx.doi.org/10.1109/ACCESS.2017.2778947]
18. Xue, J.; Zhao, Y.; Huang, S.; Liao, W.; Chan, J.C.-W.; Kong, S.G. Multilayer sparsity-based tensor decomposition for low-rank tensor completion. IEEE Trans. Neural Netw. Learn. Syst.; 2021; 33, pp. 6916-6930. [DOI: https://dx.doi.org/10.1109/TNNLS.2021.3083931]
19. Fan, H.; Li, C.; Guo, Y.; Kuang, G.; Ma, J. Spatial–spectral total variation regularized low-rank tensor decomposition for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens.; 2018; 56, pp. 6196-6213. [DOI: https://dx.doi.org/10.1109/TGRS.2018.2833473]
20. Peng, J.; Xie, Q.; Zhao, Q.; Wang, Y.; Yee, L.; Meng, D. Enhanced 3DTV regularization and its applications on HSI denoising and compressed sensing. IEEE Trans. Image Process.; 2020; 29, pp. 7889-7903. [DOI: https://dx.doi.org/10.1109/TIP.2020.3007840]
21. Sarkar, S.; Sahay, R.R. A non-local superpatch-based algorithm exploiting low rank prior for restoration of hyperspectral images. IEEE Trans. Geosci. Remote Sens.; 2021; 30, pp. 6335-6348. [DOI: https://dx.doi.org/10.1109/TIP.2021.3093780]
22. Xie, Q.; Zhao, Q.; Meng, D.; Xu, Z. Kronecker-basis-representation based tensor sparsity and its applications to tensor recovery. IEEE Trans. Pattern Anal. Mach. Intell.; 2017; 40, pp. 1888-1902. [DOI: https://dx.doi.org/10.1109/TPAMI.2017.2734888] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28783623]
23. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.-W. Nonlocal low-rank regularized tensor decomposition for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens.; 2019; 57, pp. 5174-5189. [DOI: https://dx.doi.org/10.1109/TGRS.2019.2897316]
24. Zhuang, L.; Bioucas-Dias, J.M. Fast hyperspectral image denoising and inpainting based on low-rank and sparse representations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2018; 11, pp. 730-742. [DOI: https://dx.doi.org/10.1109/JSTARS.2018.2796570]
25. Zhuang, L.; Fu, X.; Ng, M.K.; Bioucas-Dias, J.M. Hyperspectral image denoising based on global and nonlocal low-rank factorizations. IEEE Trans. Geosci. Remote Sens.; 2021; 59, pp. 10438-10454. [DOI: https://dx.doi.org/10.1109/TGRS.2020.3046038]
26. Lin, J.; Huang, T.-Z.; Zhao, X.-L.; Jiang, T.-X.; Zhuang, L. A tensor subspace representation-based method for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens.; 2020; 59, pp. 7739-7757. [DOI: https://dx.doi.org/10.1109/TGRS.2020.3032168]
27. He, W.; Yao, Q.; Li, C.; Yokoya, N.; Zhao, Q.; Zhang, H.; Zhang, L. Non-local meets global: An iterative paradigm for hyperspectral image restoration. IEEE Trans. Pattern Anal. Mach. Intell.; 2020; 44, pp. 2089-2107. [DOI: https://dx.doi.org/10.1109/TPAMI.2020.3027563]
28. Xu, S.; Cao, X.; Peng, J.; Ke, Q.; Ma, C.; Meng, D. Hyperspectral image denoising by asymmetric noise modeling. IEEE Trans. Geosci. Remote Sens.; 2022; 60, 5545214. [DOI: https://dx.doi.org/10.1109/TGRS.2022.3227735]
29. Su, X.; Zhang, Z.; Yang, F. Fast hyperspectral image denoising and destriping method based on graph Laplacian regularization. IEEE Trans. Geosci. Remote Sens.; 2023; 61, 5511214. [DOI: https://dx.doi.org/10.1109/TGRS.2023.3272906]
30. Chen, Y.; Zeng, J.; He, W.; Zhao, X.-L.; Jiang, T.-X.; Huang, Q. Fast Large-Scale Hyperspectral Image Denoising via Non-Iterative Low-Rank Subspace Representation. IEEE Trans. Geosci. Remote Sens.; 2024; 33, pp. 1211-1226.
31. Ashraf, M.; Chen, L.; Zhou, X.; Rakha, M.A. A Joint Architecture of Mixed-Attention Transformer and Octave Module for Hyperspectral Image Denoising. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2024; 17, pp. 4331-4349. [DOI: https://dx.doi.org/10.1109/JSTARS.2024.3356523]
32. Wang, H.; Peng, J.; Cao, X.; Wang, J.; Zhao, Q.; Meng, D. Hyperspectral image denoising via nonlocal spectral sparse subspace representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2023; 16, pp. 5189-5203. [DOI: https://dx.doi.org/10.1109/JSTARS.2023.3281808]
33. Sun, L.; Jeon, B.; Soomro, B.N.; Zheng, Y.; Wu, Z.; Xiao, L. Fast superpixel based subspace low rank learning method for hyperspectral denoising. IEEE Access; 2018; 6, pp. 12031-12043. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2808474]
34. Cao, C.; Yu, J.; Zhou, C.; Hu, K.; Xiao, F.; Gao, X. Hyperspectral image denoising via subspace-based nonlocal low-rank and sparse factorization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2019; 12, pp. 973-988. [DOI: https://dx.doi.org/10.1109/JSTARS.2019.2896031]
35. Zheng, Y.-B.; Huang, T.-Z.; Zhao, X.-L.; Chen, Y.; He, W. Double-factor-regularized low-rank tensor factorization for mixed noise removal in hyperspectral image. IEEE Trans. Geosci. Remote Sens.; 2020; 58, pp. 8450-8464. [DOI: https://dx.doi.org/10.1109/TGRS.2020.2987954]
36. He, C.; Cao, Q.; Xu, Y.; Sun, L.; Wu, Z.; Wei, Z. Weighted order-p tensor nuclear norm minimization and its application to hyperspectral image mixed denoising. IEEE Geosci. Remote Sens. Lett.; 2023; 20, 5510505. [DOI: https://dx.doi.org/10.1109/LGRS.2023.3322946]
37. Fu, X.; Guo, Y.; Xu, M.; Jia, S. Hyperspectral image denoising via robust subspace estimation and group sparsity constraint. IEEE Trans. Geosci. Remote Sens.; 2023; 61, 5512716. [DOI: https://dx.doi.org/10.1109/TGRS.2023.3277832]
38. Li, M.; Liu, J.; Fu, Y.; Zhang, Y.; Dou, D. Spectral Enhanced Rectangle Transformer for Hyperspectral Image Denoising. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); Los Angeles, CA, USA, 24–27 June 2023; pp. 5805-5814.
39. He, C.; Sun, L.; Huang, W.; Zhang, J.; Zheng, Y.; Jeon, B. TSLRLN: Tensor subspace low-rank learning with non-local prior for hyperspectral image mixed denoising. Signal Process.; 2021; 184, 108060. [DOI: https://dx.doi.org/10.1016/j.sigpro.2021.108060]
40. Zhang, Q.; Zheng, Y.; Yuan, Q.; Song, M.; Yu, H.; Xiao, Y. Hyperspectral image denoising: From model-driven, data-driven, to model-data-driven. IEEE Trans. Neural Netw. Learn. Syst.; 2023; 35, pp. 13143-13163. [DOI: https://dx.doi.org/10.1109/TNNLS.2023.3278866]
41. Yi, L.; Zhao, Q.; Xu, Z. Hyperspectral Image Denoising by Pixel-Wise Noise Modeling and TV-Oriented Deep Image Prior. Remote Sens.; 2024; 16, 2694. [DOI: https://dx.doi.org/10.3390/rs16152694]
42. Bioucas-Dias, J.M.; Nascimento, J.M. Hyperspectral subspace identification. IEEE Trans. Geosci. Remote Sens.; 2008; 46, pp. 2435-2445. [DOI: https://dx.doi.org/10.1109/TGRS.2008.918089]
43. Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory; 1995; 41, pp. 613-627. [DOI: https://dx.doi.org/10.1109/18.382009]
44. Gower, J.C.; Dijksterhuis, G.B. Procrustes Problems; OUP: Oxford, UK, 2004; Volume 30.
45. Erichson, N.B.; Zheng, P.; Manohar, K.; Brunton, S.L.; Kutz, J.N.; Aravkin, A.Y. Sparse principal component analysis via variable projection. SIAM J. Appl. Math.; 2020; 80, pp. 977-1002. [DOI: https://dx.doi.org/10.1137/18M1211350]
46. Maggioni, M.; Katkovnik, V.; Egiazarian, K.; Foi, A. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Trans. Image Process.; 2012; 22, pp. 119-133. [DOI: https://dx.doi.org/10.1109/TIP.2012.2210725]
47. He, C.; Xu, Y.; Wu, Z.; Zheng, S.; Wei, Z. Multi-Dimensional Visual Data Restoration: Uncovering the Global Discrepancy in Transformed High-Order Tensor Singular Values. IEEE Trans. Image Process.; 2024; 33, pp. 6409-6424. [DOI: https://dx.doi.org/10.1109/TIP.2024.3475738]
48. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process.; 2004; 13, pp. 600-612. [DOI: https://dx.doi.org/10.1109/TIP.2003.819861]
49. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process.; 2011; 20, pp. 2378-2386. [DOI: https://dx.doi.org/10.1109/TIP.2011.2109730]
50. Wald, L. Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions; Presses des Mines: Paris, France, 2002.
51. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens.; 2015; 54, pp. 178-188. [DOI: https://dx.doi.org/10.1109/TGRS.2015.2452812]
52. Maffei, A.; Haut, J.M.; Paoletti, M.E.; Plaza, J.; Bruzzone, L.; Plaza, A. A single model CNN for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens.; 2019; 58, pp. 2516-2529. [DOI: https://dx.doi.org/10.1109/TGRS.2019.2952062]
53. Zhuang, L.; Ng, M.K. FastHyMix: Fast and parameter-free hyperspectral image mixed noise removal. IEEE Trans. Neural Netw. Learn. Syst.; 2021; 34, pp. 4702-4716. [DOI: https://dx.doi.org/10.1109/TNNLS.2021.3112577]
54. Xiong, F.; Zhou, J.; Zhao, Q.; Lu, J.; Qian, Y. MAC-Net: Model-aided nonlocal neural network for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens.; 2021; 60, 5519414. [DOI: https://dx.doi.org/10.1109/TGRS.2021.3131878]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Recently, hyperspectral image (HSI) mixed denoising methods based on nonlocal subspace representation (NSR) have achieved significant success. However, most of these methods focus on optimizing the denoiser for representation coefficient images (RCIs) without considering how to construct RCIs that better inherit the spatial structure of the clean HSI, thereby affecting subsequent denoising performance. Although existing works have constructed RCIs from the perspective of sparse principal component analysis (SPCA), the refinement of RCIs in mixed noise conditions still leaves much to be desired. To address the aforementioned challenges, in this paper, we reconstructed robust RCIs based on SPCA in mixed noise circumstances to better preserve the spatial structure of the clean HSI. Furthermore, we propose to utilize the robust RCIs as prior information and perform iterative denoising in the denoiser that incorporates low-rank approximation. Extensive experiments conducted on both simulated and real HSI datasets demonstrate that the proposed robust RCIs guidance and low-rank approximation method, denoted as RRGNLA, exhibits competitive performance in terms of mixed denoising accuracy and computational efficiency. For instance, on the Washington DC Mall (WDC) dataset in Case 3, the denoising quantitative metrics of the mean peak signal-to-noise ratio (MPSNR), mean structural similarity index (MSSIM), and spectral angle mean (SAM) are 36.06 dB, 0.963, and 3.449, respectively, with a running time of 35.24 s. On the Pavia University (PaU) dataset in Case 4, the denoising quantitative metrics of MPSNR, MSSIM, and SAM are 34.34 dB, 0.924, and 5.505, respectively, with a running time of 32.79 s.
1 Institute of Intelligent Control and Image Engineering, Xidian University, Xi’an 710071, China;
2 Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China;