1. Introduction
Data acquisition via satellite or aerial imagery is a prolific aspect of remote sensing. The increasing capabilities of these systems, in terms of both new methods and hardware (e.g., new space missions), are advancing our understanding of many aspects of Earth’s phenomena [1,2,3]. Within this context, one of the key Earth Observation (EO) programs is the European Union’s Copernicus program. Managed by the European Commission, Copernicus represents a pioneering effort in Earth Observation, combining space- and ground-based observations. The Copernicus satellites are called Sentinels. The vast amount of data collected by the Copernicus satellite fleet can be distinguished by its spectral, spatial, and temporal resolutions, and it provides valuable insights into the dynamics of Earth’s ecosystems and environments [4] for various geographic locations across the globe [5,6,7,8].
Two of these Sentinel missions, Sentinel-2 and Sentinel-3, carry multi-spectral optical imaging instruments. Sentinel-2 (S2) embeds the Multi-Spectral Instrument (MSI), which is specialized in capturing detailed multispectral (MS) images of land, crucial for applications such as agriculture, forestry, and disaster management. For example, S2 images can help monitor crop health [9], assess forest density [10], and evaluate damage after natural disasters [11]. Onboard Sentinel-3 (S3), the Ocean and Land Color Instrument (OLCI) and the Sea and Land Surface Temperature Radiometer (SLSTR) primarily monitor ocean parameters such as sea surface temperature and ocean color [12]; they support applications like tracking ocean currents, monitoring coral reef health, and studying the impacts of climate change on marine ecosystems, all of which serve marine and climate research. Additionally, S3’s atmospheric monitoring capabilities help in understanding and forecasting atmospheric conditions, which are essential for climate studies and air quality monitoring [13].
The S3 OLCI and SLSTR instruments, with 21 bands in total, have been designed to provide a higher spectral resolution at the expense of a coarser ground sampling distance (GSD) of 300 m. The S2 MSI instrument, on the other hand, has been designed for applications characterized by high granularity and complexity and therefore offers a finer maximum GSD of 10 m, but at the expense of its spectral resolution (12 bands). This trade-off between spatial and spectral resolution in imaging systems is delicate, often resulting in data with moderate GSD, which can significantly impact the effectiveness of various applications. Several studies have been conducted to address this issue; for instance, some perform data fusion to achieve super-resolution for S2 images but do not include spectral enhancement [14]. The complementary multi-spectral imaging provided by S2 and S3 can be used to generate a fused data product that combines the highest level of spatial and spectral information provided by each instrument (the 10 m GSD from S2 and the 21 spectral bands from S3).
Multi-/hyperspectral image fusion algorithms are designed to extract information from different datasets (e.g., taken with different multi-spectral sensors) and create new datasets with improved spatial and/or spectral properties. These algorithms can be broadly divided into four groups: (i) pansharpening [15,16,17], (ii) estimation (mainly Bayesian) [18,19,20], (iii) matrix factorization (including tensor decomposition) [21,22,23], and (iv) deep learning (DL) [24,25,26]. The subtleties of multispectral image fusion are discussed in [24,27,28].
Although deterministic (i.e., non-deep-learning) approaches to image fusion have proven to be efficient, reliable, and generally inexpensive in computing time, they often rely on specific assumptions about statistical properties or relationships among different image sources. Additionally, these methods are typically designed with fixed parameters or models, lacking the adaptability needed for diverse datasets or varying environmental conditions, resulting in performance degradation on data outside the treated scope. Furthermore, many deterministic fusion techniques require manual feature extraction, which can be time-consuming and inadequate for capturing all relevant information. These methods also face challenges in capturing complex and non-linear relationships between image sources, particularly in cases with high variability and/or fine-grain patterns, leading to issues with generalization across different types of imagery and new scenes.
In this work, we develop DL image fusion techniques for S2 and S3 multi-spectral imaging data, leveraging synthetic training and validation data generated using EnMAP’s hyperspectral images as ground truth. Our primary focus is on the quality of the fused products, particularly their ability to accurately represent scientific information (cf. Section 6.3), along with their accuracy and robustness, rather than on the performance metrics of the architecture or network.
A graphic illustration of the challenge is shown in Figure 1.
The fused product from S2 and S3 can be applied to any field benefiting from a hyperspectral product refined at a maximum of 10 m GSD. These positive impacts range from satellite calibration to allowing for more precise detection of changes in land use, vegetation, and water bodies. This increased detail aids in disaster management, providing timely and accurate information for responding to floods, wildfires, and other natural events. Additionally, it supports urban planning and agricultural practices by offering detailed insights into crop health and urban development ([29,30,31,32]).
This work is organized as follows. First, Section 2 reviews the concept of multispectral image fusion. Section 3 presents the datasets and their preparation for training, validation, and inference. The implemented method is described in Section 4, and the results are presented in Section 5. Section 6 and Section 7 discuss the results and present our conclusions, respectively.
2. Multispectral Image Fusion
The main objective (target) of multispectral–hyperspectral data fusion is to estimate an output tensor combining high spectral and spatial resolutions. This is a generic problem described, for example, in [24,33,34]. This tensor is denoted as $\mathbf{Y} \in \mathbb{R}^{H \times W \times L}$, where H and W are the spatial dimensions and L is the spectral dimension. $\mathbf{Y}$ is also referred to as the High-Resolution Hyperspectral Image (HrHSI). The other “incomplete” data will be the High-Resolution Multispectral Image (HrMSI) and Low-Resolution Hyperspectral Image (LrHSI), denoted as $\mathbf{Z} \in \mathbb{R}^{H \times W \times l}$ and $\mathbf{X} \in \mathbb{R}^{h \times w \times L}$, respectively. h, w, and l represent the low spatial and spectral resolutions. According to the Linear Mixing Model (LMM), we can establish a relationship between $\mathbf{Y}$, $\mathbf{Z}$, and $\mathbf{X}$. The LMM assumes that every pixel of a remote sensing image is a linear combination of pure spectral entities/signatures, often called endmembers. In a linear dependency, these endmembers have coefficients, also referred to as abundances. For each pixel, the linear model is written as follows (cf. [35]):

$$y_i = \sum_{j=1}^{p} e_{ij}\, a_j + \varepsilon_i \quad (1)$$

where
- $y_i$ is the reflectance at spectral band i,
- $e_{ij}$ is the reflectance of the endmember j at band i,
- $a_j$ is the fractional abundance of j,
- $\varepsilon_i$ is the error for the spectral band i (i.e., noise, etc.).
Equation (1), written in vector form and without the error term (assuming perfect acquisition for simplicity), is expressed as:

$$\mathbf{Y} = \mathbf{E}\mathbf{A} \quad (2)$$

Similarly, for the LrHSI,

$$\mathbf{X} = \mathbf{E}\mathbf{A}_{h} \quad (3)$$

with the same spectral signatures $\mathbf{E}$ as $\mathbf{Y}$ but with lower spatial resolution. Hence, the matrix $\mathbf{A}_{h} \in \mathbb{R}^{p \times hw}$, with p being the number of endmembers, consists of low-spatial-resolution abundance coefficients. The HrMSI will have the same properties but with the opposite degradation,

$$\mathbf{Z} = \mathbf{E}_{m}\mathbf{A} \quad (4)$$

where $\mathbf{E}_{m} \in \mathbb{R}^{l \times p}$ is the endmembers matrix, with p being the number of spectral signatures and l being the number of spectral bands. The LrHSI can be considered as a spatially degraded version of the HrHSI, written as:

$$\mathbf{X} = \mathbf{Y}\mathbf{S} \quad (5)$$

with $\mathbf{S}$ as the spatial degradation matrix, referring to the downsampling and blurring operations. Furthermore, $\mathbf{R}$ can be considered as the spectral degradation matrix, giving:

$$\mathbf{V} = \mathbf{R}\mathbf{Y}\mathbf{S} \quad (6)$$

where $\mathbf{V}$ represents the LrMSI. The dependencies between the HrHSI ($\mathbf{Y}$), HrMSI ($\mathbf{Z}$), and LrHSI ($\mathbf{X}$) are illustrated in Figure 2.
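To make the notation concrete, the following minimal NumPy sketch (with arbitrary dimensions and random endmembers and abundances, purely illustrative) builds an HrHSI from the LMM and then derives the HrMSI and LrHSI through spectral and spatial degradation:

```python
import numpy as np

# Hypothetical dimensions: p endmembers, L hyperspectral bands, l multispectral bands,
# H x W high-resolution grid, spatial downsampling factor d.
p, L, l, H, W, d = 5, 21, 12, 60, 60, 10

E = np.random.rand(L, p)            # endmember signatures (spectral basis)
A = np.random.rand(p, H * W)        # high-resolution abundances, one column per pixel
A /= A.sum(axis=0, keepdims=True)   # enforce the LMM sum-to-one constraint

Y = E @ A                           # HrHSI: every pixel is a linear mix of endmembers (Eq. 2)

R = np.random.rand(l, L)            # spectral degradation (band-averaging) matrix
Z = R @ Y                           # HrMSI: spectrally degraded HrHSI (Eq. 4, with E_m = R E)

# Spatial degradation S: here a simple block average over d x d windows (blur + downsample).
Y_cube = Y.reshape(L, H, W)
X_cube = Y_cube.reshape(L, H // d, d, W // d, d).mean(axis=(2, 4))
X = X_cube.reshape(L, -1)           # LrHSI: spatially degraded HrHSI (Eq. 5)

print(Y.shape, Z.shape, X.shape)    # (21, 3600) (12, 3600) (21, 36)
```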
From a deep learning point of view, the objective is thus to find the non-linear mapping $f_{\theta}$, referred to as our neural network, which gives us an approximation of $\mathbf{Y}$ given $\mathbf{Z}$ and $\mathbf{X}$, called our prediction $\hat{\mathbf{Y}}$:

$$\hat{\mathbf{Y}} = f_{\theta}(\mathbf{Z}, \mathbf{X}) \quad (7)$$

$$f_{\theta}: \mathbb{R}^{H \times W \times l} \times \mathbb{R}^{h \times w \times L} \rightarrow \mathbb{R}^{H \times W \times L} \quad (8)$$
with $\theta$ being the network parameters. In this context, the S2 image is analogous to the HrMSI and the S3 image to the LrHSI. The HrHSI refers to the Ground Truth.

3. Materials
When tackling the fusion task with a DL approach, a major challenge emerges: the absence of ground truth. When training a neural network, it is necessary to compare the current prediction to a reference in order to calculate the difference between the two and subsequently update the model weights. This process, known as back-propagation [36], allows the neural network to update its parameters and converge towards a minimum. For the Sentinel-2 and -3 missions, no image is available that already combines the full LrHSI spectral definition and the HrMSI spatial resolution. Section 3.1 presents our approach to obtaining a ground truth (GT) for training. The synthetic dataset generation is detailed in Section 3.3.
3.1. Ground Truth
To teach neural networks to fuse HrMSI and LrHSI, a ground truth (GT) is needed that combines the high spatial and high spectral resolutions (see Section 2). Datasets of this kind are available in the context of EO, such as the Urban dataset [37], the Washington DC Mall dataset [38], and the Harvard dataset [39]. However, the main challenges with these are their limited sizes, sometimes covering less than a kilometer, and their spectral ranges, which, in most cases, do not encompass the extended range offered by S2 and S3.
Because our objective is to generate physically accurate data, we need a training dataset with the right spectral coverage, diverse images, and large enough areas covered. To remedy the lack of appropriate data, specifically prepared and/or complete synthetic datasets are needed.
In this study, the main approach was to synthetically generate Sentinel-2 and Sentinel-3 approximations, together with a ground truth, using representative hyperspectral data. Section 3.3 describes the procedure in detail and outlines the deep learning training process. It also highlights limitations arising from the theoretical spatial definition of the input data and from their spectral range. This approach can be characterized as an attempt to get as close as possible to reality and make the neural network learn the physics behind the Sentinel sensors.
An alternative approach to compensate for the weaknesses of the above method was also explored (Section 3.4). This approach involves using a well-known dataset for hyperspectral unmixing and data fusion, transforming it, and analyzing the network’s performance on EO image fusion. Specifically, we use the multispectral CAVE dataset [40]. These input data enable us to push the theoretical limits of spatial resolution in fusion and to test the network’s ability to generalize data fusion beyond the EO context. Based on the detailed performance evaluations of the EO-trained network presented in Section 5 and Section 6, readers should consider the CAVE-trained architecture as a benchmark reference. This comparison facilitates an in-depth analysis of our synthetic training approach by providing a reference point against alternatives derived from a more generic image fusion dataset. By doing so, readers can better understand the relative efficacy and benefits of our synthetic training method in contrast to more conventional approaches.
3.2. Input Multi-Spectral Data
3.2.1. Satellite Imagery
- Sentinel-2

The Sentinel-2 MultiSpectral Instrument (MSI) gathers data across 12 spectral bands, spanning from visible to shortwave infrared wavelengths (412 nm to 2320 nm). The MSI product provides reflectance values (the HrMSI) at spatial resolutions from 10 to 60 m. The images are atmospherically corrected and orthorectified at Level 2A (L2A) [41].
- Sentinel-3

The Sentinel-3 SYNERGY products include 21 spectral bands (from 400 nm to 2250 nm) at a 300 m GSD, combining data from two optical instruments: the Ocean and Land Color Instrument (OLCI) and the Sea and Land Surface Temperature Radiometer (SLSTR). This product serves as the LrHSI. Like Sentinel-2, the data are provided as reflectance values, atmospherically corrected at L2A and orthorectified.
In this study, we used the Copernicus Browser (https://browser.dataspace.copernicus.eu, accessed on 14 August 2024) to retrieve overlapping S2 and S3 image pairs with acquisition times within 5 min of each other and with a cloud coverage fraction of a maximum of 5%.
- EnMAP

EnMAP is a satellite mission dedicated to providing high-resolution, hyperspectral Earth Observation data for environmental and resource monitoring purposes. EnMAP’s spectrometer captures detailed spectral information across 246 bands, spanning from 420 nm to 2450 nm. The satellite has a 30 km swath width at a GSD of 30 m, with a revisit time of 27 days at nadir and 4 days off-nadir. Its spectral resolution is significantly more detailed than Sentinel-2 MSI (12 bands) and Sentinel-3 SYNERGY (21 bands), and its GSD is 3 times coarser than S2 but 10 times finer than S3, giving us a good compromise for our experimentation.
3.2.2. CAVE
The CAVE dataset consists of a diverse array of interior scenes featuring various physical objects, captured under different lighting conditions using a cooled CCD camera. This dataset does not include any Earth Observation images but is a well-known and commonly used dataset in multi-spectral image fusion ([25,42,43]). The images comprise 31 spectral bands ranging from blue to near-infrared (400 nm to 700 nm).
Figure 3 illustrates a typical example from the CAVE dataset, showing an image of a feather alongside its mean spectral curve, which represents the average values across all 31 bands. This provides a more comprehensive perspective of the scene compared to conventional RGB imaging.
3.3. Synthetic EO Dataset Preparation
Synthetic S2 and S3 training data, as well as ground truth data (for the fusion product), were prepared using real hyperspectral satellite imagery (with hundreds of spectral channels and GSD of 30 m or better) obtained with the Environmental Mapping and Analysis Program (EnMAP) satellite ([44]).
The EnMAP imagery, like S2 and S3, is provided in reflectance values, with L2A atmospheric correction and orthorectification. Because the acquisition hardware is a spectrometer, EnMAP gives us access to the true spectrum of the area being captured. EnMAP hyperspectral data were retrieved from the EnMAP GeoPortal.
Synthetic S2 and S3 products were derived from the EnMAP high-resolution spectra by convolving them with each of the S2 and S3 spectral response functions (SRFs) provided by the European Space Agency:

$$R_b = \frac{\int \mathrm{SRF}_b(\lambda)\, \rho(\lambda)\, d\lambda}{\int \mathrm{SRF}_b(\lambda)\, d\lambda} \quad (9)$$

with $R_b$ being the integral of the product of the SRF and the spectrum $\rho(\lambda)$ at the position of band b, normalized by the integrated SRF. Figure 5 shows an example EnMAP spectrum together with the MSI, OLCI, and SLSTR SRF curves. An example of the resulting synthetic MSI images is given in Figure 6.
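As an illustration, the band synthesis of Equation (9) can be sketched in Python as follows; the SRF values and the EnMAP-like spectrum shown here are placeholders, not the official ESA tables:

```python
import numpy as np

def synthesize_band(wavelengths, spectrum, srf_wavelengths, srf_values):
    """Simulate one Sentinel band from a hyperspectral reflectance spectrum by
    integrating the spectrum weighted by the band's spectral response function (SRF),
    as in Eq. (9)."""
    # Resample the SRF onto the hyperspectral wavelength grid.
    srf = np.interp(wavelengths, srf_wavelengths, srf_values, left=0.0, right=0.0)
    # SRF-weighted band average (trapezoidal integration).
    num = np.trapz(srf * spectrum, wavelengths)
    den = np.trapz(srf, wavelengths)
    return num / den if den > 0 else 0.0

# Illustrative example with a synthetic spectrum and a Gaussian-shaped SRF around 665 nm.
wl = np.linspace(420, 2450, 224)                  # EnMAP-like wavelength grid (nm)
spectrum = 0.2 + 0.1 * np.sin(wl / 300.0)         # placeholder reflectance spectrum
srf_wl = np.linspace(640, 690, 51)
srf_val = np.exp(-0.5 * ((srf_wl - 665) / 10.0) ** 2)
print(synthesize_band(wl, spectrum, srf_wl, srf_val))
```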
A limitation remains: Sentinel-3’s spectral range extends farther into the blue part of visible light, beyond the EnMAP data’s spectral range. Hence, the first 2 bands could not be simulated, resulting in S3 and ground truth spectra with 19 bands.
After simulating all bands for all products, two synthetic tensors are generated from each EnMAP spectral cube:
A Sentinel-2 MSI simulation, 12 bands, 30 m GSD;
A Sentinel-3 SYNERGY simulation, 19 bands, 30 m GSD.
To check the fidelity of our synthetic S2 and S3 data, we compared them against true observed S2 and S3 data for areas that are covered by both EnMAP and the Sentinel instruments. The synthetic and true images are compared using the Spectral Angle Mapper (SAM) and the Structural Similarity Index (SSI). Figure 7 shows an example of synthetic S2 data alongside the corresponding S2 observations, as well as the corresponding EnMAP, synthetic, and true spectra. We checked the sanity of the ground truth for over 80 percent of the pixels used in the training, validation, and testing sets and never found SAM or SSI values indicative of strong deviations between the synthetic and the true S2 and S3 images and spectra; a strong deviation would have manifested as a SAM above 0.1 or an SSI below 0.8.
This technique allows us to create a Sentinel-3 image with 10 times better spatial resolution; this datacube will be considered as our ground truth. To retrieve the true Sentinel-3 GSD from it, we apply a 2D Gaussian filter, degrading the spatial resolution to 300 m (The Gaussian filter acts as a low-pass filter in the frequency domain, efficiently eliminating high-frequency components that are unnecessary at lower resolutions. In essence, the 2D Gaussian filter is used for its ability to smooth, reduce noise, and preserve image integrity during significant resolution changes).
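A minimal sketch of this spatial degradation step is given below; the filter width (sigma) is an assumption chosen for illustration, not the exact value used in our pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade_to_s3(gt_cube, factor=10, sigma=None):
    """Spatially degrade a (bands, H, W) ground-truth cube from 30 m to ~300 m GSD:
    2D Gaussian low-pass filter per band, then decimation by `factor`."""
    if sigma is None:
        sigma = factor / 2.0          # assumed filter width; tuned empirically in practice
    blurred = np.stack([gaussian_filter(band, sigma=sigma) for band in gt_cube])
    return blurred[:, ::factor, ::factor]

gt = np.random.rand(19, 300, 300)     # placeholder 19-band ground truth at 30 m GSD
s3_like = degrade_to_s3(gt)
print(s3_like.shape)                  # (19, 30, 30)
```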
It is important to recognize that this approximation will never be perfect due to intrinsic differences between the S2 and EnMAP instruments that are not fully addressed by integrating the SRF, such as the calibration of the instruments. However, we are confident that the fidelity of our synthetic datasets is high when compared to the true data. Additionally, there are inherent limitations due to the theoretical spatial definition derived from the input data (30 m). The simulated data, while providing a close approximation, may not completely capture the fine spatial details present in real-world observations (10 m). However, given the diversity and pertinence of the data used, we believed that the network would be able to generalize sufficiently to overcome these limitations. The results (Section 5) show predictions performed at the training resolution for consistency. Going further, the training GSD is described in the discussion (Section 6). Nonetheless, further refinement and validation against real-world data could be necessary to enhance the accuracy and generalizability of the deep learning model; this aspect is not explored in this study.
The data augmentation and preparation pipeline is summarized in the dataloader presented in Figure 8. From this, 10,000 S2/S3/GT image triplets were extracted and used as our training dataset.
3.4. Synthetic Non-EO (CAVE) Dataset Preparation
The CAVE dataset was selected mainly for its popularity and variety. Training on CAVE serves as a reference, illustrating fusion results obtained with a network trained on standard, non-EO data.
There are two main challenges with this approach: the image spectral range does not match that of S2 and S3, and the scenes are non-EO. To tackle the first issue, data preparation is needed.
To align the spectral range of the CAVE images with that of the Sentinels, we used spline interpolation to enhance the spectral definition and adjust the output to match the Sentinel-2 and 3 spectral ranges. It is important to note that in this scenario, the data preparation is entirely synthetic, and the images do not represent actual EO observations, making it impossible to approximate the true responses of the Sentinels. Hence, unlike the preparation of EO data, all 21 bands were generated despite the spectral range of CAVE not aligning with that of the Sentinels.
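The spectral resampling can be sketched as follows with SciPy; the CAVE wavelength grid is known (400–700 nm in 10 nm steps), but the target grid and the choice of a cubic spline with extrapolation are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

cave_wl = np.arange(400, 701, 10)                  # 31 CAVE bands, 400-700 nm
cube = np.random.rand(31, 64, 64)                  # placeholder CAVE cube (bands, H, W)

target_wl = np.linspace(400, 2250, 200)            # finer grid spanning the Sentinel range
spline = CubicSpline(cave_wl, cube, axis=0)        # per-pixel spectral spline
resampled = spline(target_wl)                      # (200, 64, 64), extrapolated beyond 700 nm

print(resampled.shape)
```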
The next step was to apply the same SRF integration technique as in the previous section (i.e., Equation (9)) to the CAVE spectra to generate synthetic S2 and S3 data. The same data preparation and augmentation steps (Figure 8) were applied to retrieve our HrMSI/LrHSI/HrHSI training and validation triplets (1000 images were extracted for training).
4. Method
4.1. The Neural Network
The neural network architecture was selected according to specific criteria:

Overall fusion performance on generic datasets: Neural networks addressing the fusion task have, most of the time, been trained and evaluated on generic datasets like CAVE. To the authors’ knowledge, no study has been conducted on performance with synthetic datasets like the one we have created (Section 3.3).

The ability to generalize: Because of the synthetic nature of the training dataset, the training was carried out on images that do not perfectly fit reality. The neural network needs good generalization capabilities to apply the fusion process to diverse data.

The number of network parameters: The fusion task is complex and involves massive inputs (images with 12 to 21 bands, thousands of pixels each); a network with many parameters will inevitably involve long computational times and potentially GPU memory overflow.
Neural networks have historically been used to tackle this task, such as CNN-based networks: Guided-Net [46], originating from image super-resolution [47]; denoisers (CNN-FUS [48]); and many other architectures (e.g., [26,49,50]). Some very promising ones have emerged, such as CUCaNet [51] and Fusformer [52].
Fusformer was the selected architecture because of its high overall performance across all categories (accuracy, generalization capabilities, and parameter count). This network showed strong effectiveness at conventional fusions (on CAVE images, for example) with rather fast training times (mainly due to its small number of parameters). It uses a recent and effective deep learning technique, the attention mechanism, placing it within the Transformer family. A thorough explanation of this architecture is beyond the scope of this paper; here, the main concepts behind Transformer networks are summarized.
In recent years, Transformer networks have significantly advanced AI in fields like natural language processing and computer vision. At the core of these models is attention, which allows networks to focus on specific input elements, mimicking human cognitive abilities. Originally introduced by [53] for natural language tasks, Transformers surpassed existing RNNs. This mechanism was later adapted in the Vision Transformer (ViT) for computer vision tasks [54]. Fusformer applied the ViT to the fusion task.
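For readers unfamiliar with the mechanism, the following minimal PyTorch sketch shows the scaled dot-product self-attention at the heart of any Transformer block; it is a generic illustration, not the Fusformer implementation:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Core attention operation used by Transformer blocks: each query attends to all
    keys, and the values are mixed according to the resulting attention weights."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (tokens, tokens) similarity matrix
    weights = F.softmax(scores, dim=-1)             # attention weights sum to 1 per query
    return weights @ v

# Toy example: 100 tokens (e.g., image patches) of dimension 32, used as q, k, and v.
tokens = torch.randn(100, 32)
out = scaled_dot_product_attention(tokens, tokens, tokens)   # self-attention
print(out.shape)   # torch.Size([100, 32])
```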
The Transformer block, inside Fusformer, is symbolized by the green block within the architecture in Figure 9.
4.2. Transformer Training
For the synthetic EO dataset, Fusformer (the implementation comes from the paper’s code repository, with no modification aside from the shape of the inputs and outputs) was trained for 150 epochs on 10,000 S2/S3/GT triplets with 1000 validation images. The computation was carried out on an NVIDIA GeForce RTX 3090 with 24 GiB of memory. To assess the performance of a fusion model in the presence of ground truth, various metrics are employed, including Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Error-Relative Global Absolute Scale (ERGAS), and Spectral Angle Mapper (SAM). All training and validation were performed using the Python programming language. All of these metrics are described in detail in Appendix B.
It is important to note that, contrary to typical RGB images, hyperspectral data are more demanding in terms of GPU RAM for the same number of spatial pixels. Thus, the networks were trained at 20 m GSD for the Sentinel-2 images, halving the pixel count along each spatial dimension. As an example, without downsampling S2, for an input S2 image of 150 × 150 pixels, the neural network processes 150 × 150 × 12 pixels for S2 and 150 × 150 × 21 for S3. Figure 9 shows that Sentinel-3 is upsampled to match Sentinel-2’s shape, giving 742,500 pixels to compute for an image of just 150 × 150 pixels. Because of the extensive memory load, the batch size was set to 1. The learning rate was left at its default value from the original Fusformer code, with a decay of 0.1 applied every 30 epochs. In addition to the high variety of the training dataset, which drastically lowers the overfitting risk (particularly with synthetic data), a dropout rate of 0.2 was used.
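The training configuration can be summarized by the following PyTorch skeleton. The convolutional stand-in model, the L1 loss, and the 1e-4 learning rate are assumptions for illustration; only the batch size, dropout rate, number of epochs, and decay schedule mirror the settings described above:

```python
import torch
from torch import nn, optim

# Stand-in module; the real network is the Fusformer implementation from its repository.
model = nn.Sequential(nn.Conv2d(12 + 19, 64, 3, padding=1),
                      nn.Dropout(0.2),
                      nn.Conv2d(64, 19, 3, padding=1))

criterion = nn.L1Loss()                              # assumed reconstruction loss
optimizer = optim.Adam(model.parameters(), lr=1e-4)  # lr value is an assumption
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)  # 0.1 decay every 30 epochs

for epoch in range(150):
    # One random S2/S3/GT triplet per step here (batch size 1 due to GPU memory constraints).
    s2 = torch.rand(1, 12, 150, 150)
    s3_up = torch.rand(1, 19, 150, 150)              # S3 upsampled to the S2 grid
    gt = torch.rand(1, 19, 150, 150)
    optimizer.zero_grad()
    pred = model(torch.cat([s2, s3_up], dim=1))
    loss = criterion(pred, gt)
    loss.backward()
    optimizer.step()
    scheduler.step()
```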
Figure 10 and Table 1 show the convergence of metrics with an example of better spatial features definition throughout training.
5. Results
In this section, we present contextual S2 and S3 fusions. By contextual, we mean that the input images are true S2 and S3 images coming from the Copernicus request hub, and the following results demonstrate the network capabilities on non-synthetic images.
We apply the models obtained in the previous section to real S2 and S3 multispectral images to obtain new S2-S3 fusion products.
Inference Results—Natural-Color Composite Images and Spectra
In Section 4.2, we explain that, because of computing power limitations, the input S2 images for the CAVE training were downgraded to a GSD of 20 m, giving a resolution factor of 15 between the HrMSI and the LrHSI. In the case of the EO training, the native EnMAP resolution is 30 m GSD (Section 3.3), restricting the input resolution difference to a factor of 10. For consistency, the figures and metrics presented in this section use the same resolution factors as seen in training.
In contrast to the training with the EnMAP-based ground truth, the CAVE training set offers a theoretically “infinite” spatial resolution relative to Earth Observation data, with interior scene resolution reaching approximately millimeter-level precision. Despite being trained on unrelated data, the primary expectation for network generalization was that the Transformer could still effectively transfer spatial features, as certain elements within the CAVE images resemble EO features.
Figure 11 (top panel) shows four RGB composites with the following bands:
S3: 4, 7, 10 (490 nm, 560 nm, 665 nm)
S2: 2, 3, 4 (490 nm, 560 nm, 665 nm)
AI fused: 4, 7, 10 for CAVE trained and 2, 5, 8 for EO trained (see Section 3.3 for the two missing bands)
The bottom panel displays the mean spectra for all images with standard deviation at each band. Table 2 shows the metrics presented in Appendix C. Please note that the Inception Score is unavailable for these tests. This is due to the classifier being trained on 10 m GSD images, making it unsuitable for handling predictions at 20 m or 30 m.
Several metrics are used to assess the accuracy of the inference. One major difference is that, unlike during training, we do not have access to a ground truth; therefore, the previously used metrics (Section 4.2) can no longer be calculated. For the inference, we use three different metrics: the Jensen–Shannon Divergence, SAM, and SSI. Note that here, the SSI is calculated on the S2 panchromatic image downgraded to 20 m for CAVE and 30 m for EO. More details on each of these metrics are given in Appendix C.
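A sketch of how these three inference metrics can be computed is given below; the normalization of the mean spectra and the construction of the panchromatic image are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from skimage.metrics import structural_similarity

def sam(a, b):
    """Spectral Angle Mapper between two mean spectra (radians)."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Placeholder data: fused cube at the S2 grid, an S3 cube, and an S2 panchromatic image.
fused = np.random.rand(19, 300, 300)
s3 = np.random.rand(19, 30, 30)
s2_pan = np.random.rand(300, 300)

mean_fused = fused.mean(axis=(1, 2))
mean_s3 = s3.mean(axis=(1, 2))

# Spectral agreement with Sentinel-3 (no ground truth needed).
jsd = jensenshannon(mean_fused / mean_fused.sum(), mean_s3 / mean_s3.sum())
angle = sam(mean_fused, mean_s3)

# Spatial agreement with the (downgraded) Sentinel-2 panchromatic image.
fused_pan = fused.mean(axis=0)
ssi = structural_similarity(fused_pan, s2_pan, data_range=1.0)
print(jsd, angle, ssi)
```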
These first results show that the EO-trained Transformer fits the S3 mean spectrum with similar standard deviation values. This behavior is expected (reconstructed images with the S3 spectrum and the S2 GSD). Slight differences can be seen between the AI-fused product and the S2 composites, which are explained by the intrinsic deviations between the S2 and S3 spectra: although the bands used to create the composites are at the same wavelengths, they do not always have the same responses due to disparities in instruments, calibration, visit time, filter width, etc. The SSI (Table 2) of 0.988 (best is 1) reflects a good reconstruction of spatial features, usually slightly below the CAVE training, which seems to perform better overall in fine-grain detail transfer (potentially explained by the CAVE images’ millimeter-level resolution). The composite colors will most of the time be closer to the S2 image; the underlying cause is that the CAVE-trained model tends to reconstruct spectra closer to the S2 reflectance than to S3, leading to significantly worse spectral metrics (a roughly 2 times higher J-S Divergence and around 6 times higher SAM).
The white square on the S2 composite in Figure 11 is shown zoomed in in Figure 12; the comparison is made at a 30 m GSD for S2 to truly show the fusion accuracy given the spatial information available at inference time.
Examples like Figure 13 show significant spectral deviations coming from the CAVE-trained network; such deviations were not observed on the EO-trained Transformer.
The CAVE dataset spectra tend to be flat due to the scenes’ chemical composition. The network has difficulties reconstructing spectra deviating from the examples seen throughout training (e.g., Figure 13), where the mean spectra rise sharply around 700 nm, a common behavior of dense chlorophyll-rich vegetation (the so-called “red edge”). The CAVE-trained network’s spectral accuracy drastically decreases in these situations (also shown in Table 3).
In the case of the Amazonia zoomed-in area, shown in Figure 14, some spatial features were not accurately reconstructed, e.g., the cloud shadow in the upper-left corner. One explanation could be that the neural network was trained on almost cloudless data. Including more (partially) cloudy images in the training dataset could perhaps give better results.
We stress that Sentinel-2 and Sentinel-3 images cannot be taken at the exact same time, which can lead to potential spatial discrepancies between the two. To address this, we selected the closest possible acquisition dates for both the S2 and S3 images, operating under the assumption that a 5 min difference is insufficient for significant spatial changes to occur. However, if the image acquisition times are significantly different, users should be aware that the network may reconstruct spatial features that are not present in one of the input images.
These results lead to the following conclusions on the trained networks:
Both networks can perform data fusion at the training GSD (30 m).
The CAVE-trained Transformer has slightly better spatial reconstructions at the training GSD.
The CAVE-trained network’s fused spectra stick to Sentinel-2, while the EO network’s stick to Sentinel-3.
The spectral reconstruction capability of the EO-trained Transformer surpasses that of the other by several orders of magnitude.
The EO network is more robust to diverse inputs and GSD (discussed in Section 6).
The CAVE network showcased spatial and spectral “hallucinations” at 30 and 10 m. The EO-trained network remained stable.
6. Discussion
The results presented in the previous section (Section 5) demonstrated the accuracy of the network outputs in their training context. Here, we discuss the neural network’s ability to generalize beyond the training scope. Three particular cases are discussed. First, we show that it is possible to push the neural network to fuse images beyond the GSD seen during training (Section 6.1). Second, we discuss wide-field predictions and Sentinel-3 image retrieval by degrading the fused outcome, allowing us to calculate distances and assess the deviation (Section 6.2). Third, land cover segmentation is performed on both the fused and Sentinel-2 products to assess the impact on NDVI products (Section 6.3).
6.1. Inference (Fusion) beyond the Network Training Resolution
It became apparent through testing that it is possible to make the neural network fuse images with a smaller GSD than the one seen during training. The Transformer shows remarkable generalization capabilities and manages to transfer finer spatial features to the reconstructed bands. Figure 15 gives a fusion example for an urban scene (Los Angeles) at the maximum Sentinel-2 GSD (10 m). This example shows that the AI fusion not only generalizes well to higher spatial resolutions but also achieves good results for heterogeneous scenes (i.e., where the per-band standard deviation is high). The ability of the network to reconstruct scenes with fine-grained details and high variance at 10 m resolution is further illustrated in Figure 16.
Another way to investigate the network output’s imperfections is to analyze them in the frequency domain. Figure 17 shows the Discrete Fourier Transforms (DFTs) for the 665 nm band. Some recognizable features are missing in the AI-fused DFT compared to that of Sentinel-2. Although the reconstruction has a good fidelity level at low frequencies, some structures are missing at medium and high magnitudes. This is highlighted in the difference plot on the right, which extracts the pixels with the highest discrepancy. Since the network’s output acts as a high-pass filter that is summed with the up-scaled Sentinel-3 image afterward, it is natural to think that the main difficulty is to reproduce the frequencies necessary for sharp edges and fine-grain feature reconstruction. Improving the hyperparameters or implementing a deeper neural network architecture (more attention heads, for example) might result in a better capture of medium and high frequencies for an improved fusion.
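The frequency-domain comparison can be reproduced with a few lines of NumPy; the 99th-percentile threshold used to highlight the largest discrepancies is an arbitrary choice for illustration:

```python
import numpy as np

def log_magnitude_spectrum(band):
    """Centered log-magnitude of the 2D DFT of a single band."""
    F = np.fft.fftshift(np.fft.fft2(band))
    return np.log1p(np.abs(F))

# Placeholder 665 nm bands from the Sentinel-2 image and the AI-fused product.
s2_band = np.random.rand(512, 512)
fused_band = np.random.rand(512, 512)

diff = np.abs(log_magnitude_spectrum(s2_band) - log_magnitude_spectrum(fused_band))
# Highlight the frequencies with the largest discrepancy (as in the difference plot).
threshold = np.percentile(diff, 99)
missing_structures = diff > threshold
print(missing_structures.sum(), "frequency bins above the 99th-percentile discrepancy")
```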
It is important to underline that the CAVE-trained Transformer has a better SAM metric than the EO-trained model (Table 4) but shows spatial feature “hallucinations” (colored pixels unrelated to the surroundings) not seen in the latter. This behavior is shown in Figure 18 where the hallucinations are highlighted (cf. Figure 19 for a close-up). This effect was not encountered with the EO-trained Transformer (leading to a much higher SSI value, as shown in Table 5). A potential explanation comes from the fact that the EO-trained dataset is much larger and more diverse than the CAVE one, making generalization easier.
In summary, the EO-trained network globally showed the ability to reconstruct spatial features at 10 m GSD, as in Figure 11, with a close-up example in Figure 12 and the same zoomed-in area at 10 m GSD shown in Figure 20. Additional fusion examples are given in Appendix F, using the EO-trained neural network only, to illustrate, for instance, the spatial and spectral variety of scenes.
6.2. Wide Fields and Pseudo-Invariant Calibration Sites
An extension of the above analysis is to conduct fusions on wide images covering several kilometers (getting closer to the true Sentinel-2 and 3 swaths). From these broad fusions, it is possible to retrieve a Sentinel-3-like image by intentionally degrading the result. By doing so, we can calculate the distance measures from the degraded output and the true Sentinel-3 image.
For these comparisons, we have selected areas included in the Pseudo-Invariant Calibration Sites (PICSs) program ([55,56,57]). These regions serve as terrestrial locations dedicated to the ongoing monitoring of optical sensor calibration for Earth Observation during their operational lifespan. They have been extensively utilized by space agencies over an extended period due to their spatial uniformity, spectral stability, and temporal invariance. Here, we chose two sites, Algeria 5 (center coordinates: N 31.02, E 2.23, area: 75 × 75 km) and Mauritania 1 (center coordinates: N 19.4, W 9.3, area: 50 × 50 km).
It is important to note that this is not an iterative process; the Sentinel-3 image is indeed among the network’s inputs. It would be natural to think that degrading the output to retrieve an input-like image is a regressive endeavor. However, this is carried out mainly to show that we do not deviate significantly from the original spectral data. To approximate the Sentinel-3 GSD, we first convolve the HrHSI with a Gaussian filter and then pass it through a bicubic interpolation. The Gaussian kernel and the interpolation are defined in Appendix D.
The main interest of this process is to go back to our original Sentinel-3 image, giving us the possibility to use a GT to determine the distance between the Sentinel-3 and the degraded fusion result. The metrics used are the RMSE, Euclidean distance, and cosine similarity (cf. Table A1).
All of the following fusions are performed with the EO-trained network, mainly because its mean spectra are closer to Sentinel-3, leading to better results when trying to retrieve the 300 m GSD SYNERGY product.
Because images this large cannot be inferred in one pass, the fusion was performed using mosaic predictions, with 150 × 150-pixel sub-images and a 20-pixel margin overlap; for Figure 21 and Figure 22, predictions for the 256 sub-images were carried out in 2 min 41 s.
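A sketch of such a mosaic (tiled) prediction is given below; the exact overlap handling used in our pipeline is not detailed here, so this version simply crops the tile margins rather than blending them:

```python
import numpy as np

def mosaic_predict(predict_fn, s2, s3_up, tile=150, margin=20):
    """Fuse a wide scene tile by tile: run `predict_fn` on overlapping `tile` x `tile`
    windows and keep only the central part of each prediction to avoid edge artifacts."""
    bands_out = s3_up.shape[0]
    H, W = s2.shape[1:]
    out = np.zeros((bands_out, H, W), dtype=np.float32)
    step = tile - 2 * margin
    for y in range(0, H, step):
        for x in range(0, W, step):
            # Shift the window so it stays inside the image while covering [y, y+step).
            y0 = max(0, min(y - margin, H - tile))
            x0 = max(0, min(x - margin, W - tile))
            pred = predict_fn(s2[:, y0:y0 + tile, x0:x0 + tile],
                              s3_up[:, y0:y0 + tile, x0:x0 + tile])
            hy, wx = min(step, H - y), min(step, W - x)
            out[:, y:y + hy, x:x + wx] = pred[:, y - y0:y - y0 + hy, x - x0:x - x0 + wx]
    return out

# Dummy "network" that simply returns the upsampled S3 tile, to exercise the tiling logic.
fused = mosaic_predict(lambda s2_t, s3_t: s3_t,
                       np.random.rand(12, 2000, 2000), np.random.rand(19, 2000, 2000))
print(fused.shape)   # (19, 2000, 2000)
```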
Figure 21. Twenty-kilometer-wide GSD 10 m and 300 m fused products inside the Algeria CEOS zone (top panels). The Sentinel-3 and 300 m GSD AI-fused image mean spectra are also displayed in the bottom panel. The corresponding metrics for this inference are given in Table 6.
Figure 22. Twenty-kilometer-wide GSD 10 m and 300 m fused products inside the Mauritania CEOS zone. The Sentinel-3 and 300 m GSD AI-fused image mean spectra are also displayed in the second row. Metrics for this inference are displayed in Table 7.
Other wide fields were tested, and some of them were selected for their dense and varied spatial features, like in urban areas. A typical example is depicted in Figure 23.
This inference took 37 s for 64 mosaic sub-images. The metrics are listed in Table 8.
To conclude this section on wide fields, we show visually (with RGB composites) and with distance metrics that degrading the fused product to simulate a 300 m GSD gives only a small deviation from the true Sentinel-3 data, e.g., with SSI and cosine similarity always close to the best value (best is 1).
6.3. Normalized Difference Vegetation Index Classifications
Through a comparative analysis, we can evaluate the non-regression of our network and ensure that accuracy is maintained between Sentinel-2 and the fused product at 10 m. The Normalized Difference Vegetation Index (NDVI) is a numerical indicator used in remote sensing to assess vegetation health and density. It measures the difference between NIR and red light reflectance, providing insights into vegetation health and biomass. It can also easily distinguish green vegetation from bare soils. NDVI values typically span from −1.0 to 1.0. Negative values signify clouds or water areas, values near zero suggest bare soil, and higher positive NDVI values suggest sparse vegetation (0.1–0.5) or lush green vegetation (0.6 and above) ([58,59]). Using our trained neural network and a segmentation ground truth over a specific area, it is possible to compare the NDVIs derived from both the fused and Sentinel-2 products with the classification GT. For the fused product, we benefit from the Sentinel-3 spectral bands to compute the NDVI. We recall that, from the NDVI definition, $\mathrm{NDVI} = (\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}})/(\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}})$, where $\rho_{\mathrm{NIR}}$ is the pixel reflectance value in the NIR and $\rho_{\mathrm{red}}$ is the pixel reflectance value at the red wavelength, the main difference between the Sentinel-2 NDVI and the Sentinel-3 NDVI is that we can combine several Sentinel-3 bands to extract the NIR and red factors. This process is not possible with Sentinel-2 due to its sparse spectra. For the AI-fused product, we used the mean value of bands 15, 14, and 13 for the NIR reflectance and the mean value of bands 8, 7, and 6 for the red reflectance. For Sentinel-2, only one band was used for the NIR (band 7) and one for the red (band 3). Figure 24 shows the NDVI matrices derived from the AI-fused (EO-trained) and Sentinel-2 products. Both are compared to the area’s ground truth, retrieved from the Chesapeake dataset [60].
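The two NDVI computations can be sketched as follows; the band indices follow the text (converted to 0-based array indexing), while the input cubes are placeholders:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (rho_NIR - rho_red) / (rho_NIR + rho_red), computed per pixel."""
    return (nir - red) / (nir + red + 1e-12)

fused = np.random.rand(19, 500, 500)     # placeholder AI-fused product at 10 m
s2 = np.random.rand(12, 500, 500)        # placeholder Sentinel-2 product

# Fused product: average several narrow bands around the NIR and red regions
# (bands 15, 14, 13 and 8, 7, 6 in the text, converted here to 0-based indices).
ndvi_fused = ndvi(fused[[14, 13, 12]].mean(axis=0), fused[[7, 6, 5]].mean(axis=0))

# Sentinel-2: a single NIR band (band 7) and a single red band (band 3), 0-based below.
ndvi_s2 = ndvi(s2[6], s2[2])

print(ndvi_fused.shape, ndvi_s2.shape)
```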
The error is assessed by computing the Jaccard score, commonly used in classification accuracy measurements [61]. It is the ratio of the size of the intersection of the two classified sets (let A be the predicted NDVI classes and B the GT) to the size of their union, defined as

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|} \quad (10)$$
Note that, even though the retrieved Sentinel-2 and Sentinel-3 images are close in time to the ground truth acquisition, they do not perfectly overlap. This can lead to slight differences between observed elements and the GT labels. The Jaccard score was calculated regardless of this de-synchronization. The Jaccard score for the AI-fused product is 0.340, while the Sentinel-2 score is 0.337 (the best score is 1). Despite being minor, this difference in accuracy underscores a slight improvement achieved with the fused product, primarily attributed to the enhanced spectral definition, which facilitates the collection of additional information. Another example is shown in Figure A2.
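A sketch of the Jaccard computation on discretized NDVI maps is given below; the class thresholds and the class set are illustrative assumptions, not the ones used for Figure 24:

```python
import numpy as np
from sklearn.metrics import jaccard_score

# Placeholder NDVI map and ground-truth classes (e.g., 0 = water/cloud, 1 = bare soil, 2 = vegetation).
ndvi_pred = np.random.uniform(-1, 1, (500, 500))
gt_classes = np.random.randint(0, 3, (500, 500))

# Threshold the NDVI into the same discrete classes before comparing with the GT labels.
pred_classes = np.digitize(ndvi_pred, bins=[0.0, 0.1])   # assumed thresholds, illustration only

score = jaccard_score(gt_classes.ravel(), pred_classes.ravel(), average="macro")
print(score)
```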
7. Summary and Conclusions
In this study, we presented a new DL methodology for the fusion of Sentinel-2 and Sentinel-3 images, utilizing existing hyperspectral missions, particularly EnMAP, to address the absence of varied ground truth images in the Earth Observation image fusion discipline. Our approach aimed to reconstruct images embedding the Sentinel-3 spectra at the Sentinel-2 spatial resolution. To this end, we customized an existing Transformer-based neural network architecture, Fusformer.
To emphasize the importance of using contextual data, we trained our neural network using two distinct training and validation datasets. For the first, we created a synthetic contextual reference dataset, including ground truth, using a large variety of hyperspectral EnMAP images. For the second, we used the CAVE database, consisting of multi-spectral images of interior scenes, to create a generic, non-EO-specific training and validation set. This comparison is also useful since the CAVE data are ubiquitously used for benchmarking (multi-/hyperspectral) image fusion and super-resolution algorithms.
Through comprehensive experimentation and evaluation, we observed notable differences in the performance of the two neural networks when applied to the task of Sentinel-2 and Sentinel-3 image fusion. The network trained on the synthetic EO dataset outperformed its counterpart trained on non-EO data across various evaluation metrics. In particular, inference with the non-EO model gave rise to “hallucinations”, pixels showing erratic spectral behavior not seen with the EO contextual model. Furthermore, our selected neural network demonstrated the potential to fuse Sentinel-2 and Sentinel-3 images beyond the spatial resolution encountered during training. Despite this resolution disparity, our approach extended the fusion capabilities to higher resolutions, showcasing its adaptability and robustness in handling the varying spatial scales inherent in Earth Observation data. Inference on wide fields and Pseudo-Invariant Calibration Sites gave excellent results, which is a first step towards an operational implementation of S2-S3 data fusion. Lastly, we looked at a practical example, NDVI classification, to illustrate how S2-S3 fusion products could potentially improve EO applications and services.
Our findings highlight the potential and importance of generating synthetic contextual (EO) training input, as well as the Transformer-based neural networks, to improve the fusion of multi-spectral remote sensing observations. Hyperspectral missions can play a key role in providing necessary ground truth.
This approach not only facilitates the integration of complementary information from different satellite sensors but also contributes to advancing the capabilities of EO data analysis and interpretation. However, limitations do exist.
The study’s limitations are primarily rooted in the synthetic nature of the training data, which introduces biases that may not fully capture real-world fusion scenarios. Moreover, the reliance on Sentinel-2 and Sentinel-3 image pairs with small temporal differences restricts the broader applicability of the methodology, as it diminishes the potential for fusing images with larger temporal gaps. Finally, due to the lack of true ground truth data, it remains challenging to definitively validate the results at the Sentinel-2 ground sampling distance (GSD) of 10 m, leaving some uncertainty about the model’s accuracy and effectiveness. Nonetheless, the approach demonstrates significant promise by showcasing the capabilities of multi-spectral fusion using deep neural networks trained on synthetic datasets, potentially enhancing EO applications and demonstrating the potential for further advancements in EO data analysis and interpretation.
Training on synthetic data, even when sourced from a different instrument from those used at inference, presents an opportunity to enhance models. Further research is necessary to maximize the performance and scalability of both the data augmentation and preparation pipeline and the network architecture itself. Additionally, generalizing our approach across different EO missions and platforms could provide valuable insights into its broader applicability and potential for improving the synergy between existing and future Earth Observation systems.
Conceptualization, P.-L.C., E.B., N.L.J.C., J.B.-S. and A.M.; methodology, P.-L.C. and E.B.; software, P.-L.C. and E.B.; validation, P.-L.C. and E.B.; formal analysis, P.-L.C.; investigation, P.-L.C., E.B. and N.L.J.C.; resources, N.L.J.C. and J.B.-S.; data curation, P.-L.C. and E.B.; writing—original draft preparation, P.-L.C. and E.B.; writing—review and editing, P.-L.C., E.B., N.L.J.C., J.B.-S. and A.M.; visualization, P.-L.C. and E.B.; supervision, J.B.-S. and N.L.J.C.; project administration, J.B.-S., N.L.J.C. and A.M.; funding acquisition, J.B.-S., N.L.J.C. and A.M. All authors have read and agreed to the published version of the manuscript.
The data presented in this study were derived from the following resources available in the public domain:
This study has been carried out in the framework of the Agence Nationale de la Recherche’s LabCom INCLASS (ANR-19-LCV2-0009), a joint laboratory between ACRI-ST and the Institut d’Astrophysique Spatiale (IAS). We thank the reviewers for their constructive feedback that helped improve the manuscript.
The authors declare no conflicts of interest.
The following abbreviations are used in this manuscript:
AI | Artificial Intelligence |
CCD | Charge-Coupled Device |
CEOS | Committee on Earth Observation Satellites |
DFT | Discrete Fourier Transform |
EnMAP | Environmental Mapping and Analysis Program |
EO | Earth Observation |
ERGAS | Error Relative Global Absolute Scale |
GSD | Ground Sampling Distance |
GT | Ground Truth |
HrHSI | High-resolution Hyperspectral Image |
HrMSI | High-resolution Multispectral Image |
HS | hyperspectral |
LMM | Linear Mixing Model |
LrHSI | Low-resolution Hyperspectral Image |
MS | Multi-Spectral |
MSI | Multi-Spectral Instrument |
MSI | Multispectral Image |
NDVI | Normalized Difference Vegetation Index |
NN | Neural Network |
OLCI | Ocean and Land Color Instrument |
PICS | Pseudo-Invariant Calibration Sites |
PSNR | Peak Signal-to-Noise Ratio |
RGB | Red-Green-Blue |
RMSE | Root Mean Square Error |
RNN | Recurrent Neural Network |
S2 | Sentinel-2 |
S3 | Sentinel-3 |
SAM | Spectral Angle Mapper |
SLSTR | Sea and Land Surface Temperature Radiometer |
SNR | Signal-to-Noise Ratio |
SRF | Spectral Response Function |
SSI | Structural Similarity Index |
ViT | Vision Transformer |
Training metrics’ start and end values, 150 epochs. Arrows next to the metric indicate whether the metric needs to decrease (↓) or increase (↑).
Metric | Epoch 1 | Epoch 150 |
---|---|---|
MSE train (↓) | 0.036 | 0.003 |
MSE val (↓) | 0.032 | 0.003 |
RMSE (↓) | 0.053 | 0.006 |
ERGAS (↓) | 1.135 | 0.129 |
SAM (↓) | 13.976 | 1.152 |
PSNR (↑) | 22.134 | 43.456 |
Single-band fusion (Band 18) | [Image omitted: epoch 1] | [Image omitted: epoch 150]
Inference metrics calculated for the fused products shown in
Metric | EO | CAVE |
---|---|---|
J-S Divergence (↓) | 0.032 | 0.065 |
SAM (↓) | 0.013 | 0.085 |
SSI (↑) | 0.988 | 0.988 |
Inference metrics calculated for the fused products shown in
Metric | EO | CAVE |
---|---|---|
J-S Divergence (↓) | 0.051 | 0.502 |
SAM (↓) | 0.020 | 0.278 |
SSI (↑) | 0.900 | 0.967 |
Metrics for the urban scene (Los Angeles) fusion shown in
Metric | EO | CAVE |
---|---|---|
J-S Divergence (↓) | 0.095 | 0.102 |
SAM (↓) | 0.079 | 0.065 |
SSI (↑) | 0.989 | 0.953 |
IS (↑) | 1.245 | 1.264 |
Fused product metrics (France). The best results are underlined. Metrics that need to be maximized are noted (↑), best is 1. Metrics that need to be minimized are noted (↓), best is 0. In the case of the IS, no finite upper limit is defined.
Metric | EO | CAVE |
---|---|---|
J-S Divergence (↓) | 0.123 | 0.321 |
SAM (↓) | 0.011 | 0.136 |
SSI (↑) | 0.961 | 0.512 |
IS (↑) | 3.589 | 3.377 |
Metrics calculated on the interpolated fusion and the Sentinel-3 image (CEOS Algeria). Metrics that need to be maximized are noted (↑), best is 1. Metrics that need to be minimized are noted (↓), best is 0.
Metric | Value |
---|---|
RMSE (↓) | 0.011 |
Euclidean distance (↓) | 3.403 |
Cosine similarity (↑) | 0.999 |
Metrics calculated on the interpolated fusion and the Sentinel-3 image (CEOS Mauritania). Metrics that need to be maximized are noted (↑), best is 1. Metrics that need to be minimized are noted (↓), best is 0.
Metric | Value |
---|---|
RMSE (↓) | 0.01430 |
Euclidean distance (↓) | 4.239 |
Cosine similarity (↑) | 0.999 |
Metrics calculated on the interpolated fusion and the Sentinel-3 image (Los Angeles). Metrics that need to be maximized are noted (↑), best is 1. Metrics that need to be minimized are noted (↓), best is 0.
Metric | Value |
---|---|
RMSE (↓) | 0.016 |
Euclidean distance (↓) | 2.463 |
Cosine similarity (↑) | 0.996 |
Appendix A. Data Preparation
Appendix A.1. Number of EnMAP Requested Images per Continent
- Asia: 30 images
- Europe: 20 images
- North Africa: 16 images
- South Africa: 19 images
- North America: 30 images
- South America: 22 images
- Oceania: 22 images
Appendix B. Performance Metrics
We use the following widely used metrics during training to assess the network’s convergence. In data fusion and spectral unmixing tasks, the literature abounds with definitions.
Peak Signal-to-Noise Ratio (PSNR) PSNR evaluates the fidelity of a reconstructed signal compared to its original version by calculating the ratio of the maximum signal power to the power of corrupting noise. Higher PSNR values signify a closer match between the original and reconstructed signals, indicating better quality and fidelity.
Root Mean Squared Error (RMSE) RMSE provides a measure of the average discrepancy between predicted and observed values. By calculating the square root of the mean squared differences between predicted and observed values, RMSE offers insight into the overall accuracy of a model or estimation method. Lower RMSE values indicate a smaller average error and better agreement between predicted and observed values.
Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) ERGAS is specifically designed to assess the performance of image fusion techniques, particularly in remote sensing applications. It quantifies the relative error in global accuracy between the original and fused images, taking into account both spectral and spatial distortions. ERGAS is expressed as a percentage, with lower values indicating higher fusion quality and better preservation of image details.
Spectral Angle Mapper (SAM) SAM measures the spectral similarity between two spectral vectors by calculating the angle between them in a high-dimensional space representing spectral reflectance values. This metric is commonly employed in remote sensing tasks such as image classification and spectral unmixing to quantify the degree of similarity between spectral signatures. Lower SAM values indicate a higher degree of spectral similarity between the compared spectra, suggesting a closer match in spectral characteristics.
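For reference, common formulations of these training metrics can be implemented in a few lines of NumPy; the exact variants used during training may differ slightly (e.g., in the ERGAS resolution ratio):

```python
import numpy as np

def rmse(ref, est):
    return np.sqrt(np.mean((ref - est) ** 2))

def psnr(ref, est, data_range=1.0):
    return 10.0 * np.log10(data_range ** 2 / np.mean((ref - est) ** 2))

def sam(ref, est):
    """Mean spectral angle (degrees) over all pixels of (bands, H, W) cubes."""
    r = ref.reshape(ref.shape[0], -1)
    e = est.reshape(est.shape[0], -1)
    cos = np.sum(r * e, axis=0) / (np.linalg.norm(r, axis=0) * np.linalg.norm(e, axis=0) + 1e-12)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def ergas(ref, est, ratio=10.0):
    """ERGAS, with `ratio` the spatial resolution ratio between low- and high-resolution images."""
    band_rmse = np.sqrt(np.mean((ref - est) ** 2, axis=(1, 2)))
    band_mean = np.mean(ref, axis=(1, 2))
    return 100.0 / ratio * np.sqrt(np.mean((band_rmse / (band_mean + 1e-12)) ** 2))

gt = np.random.rand(19, 150, 150)
pred = gt + 0.01 * np.random.randn(19, 150, 150)
print(rmse(gt, pred), psnr(gt, pred), sam(gt, pred), ergas(gt, pred))
```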
Appendix C. Inference Metrics
The Structural Similarity Index (SSI) measures the similarity between two images in terms of luminance, contrast, and structure, providing a comprehensive assessment of image quality (between S2 and the fused result). The Jensen–Shannon Distance (JSD) quantifies the similarity between two image distributions, providing a metric for spectral distribution (between S3 and fused result). The SAM, described before, is also used to calculate the distance between the predicted mean spectrum and the Sentinel-3 mean spectrum. The Inception Score evaluates the quality and diversity of generated images by measuring the confidence of a pre-trained neural network in classifying the images and their diversity based on class probabilities (only on the fused product).
Appendix C.1. Structural Similarity Index
The Structural Similarity Index (SSI) involves several formulas for comparing luminance, contrast, and structure. It was originally designed to measure digital image and video quality.

Let x and y be the input images to compare and L the pixel dynamic range (for example, 255 for 8-bit images).

Because the SSI operates on single-channel images, an n-channel image is first reduced to a panchromatic image:

$$P(u, v) = \sum_{i=1}^{n} w_i\, X_i(u, v)$$

In this formula, $P(u, v)$ is the pixel value at coordinates $(u, v)$ in the panchromatic image, $X_i(u, v)$ is the pixel value of channel i at coordinates $(u, v)$, $w_i$ is the weight assigned to channel i, and n is the total number of channels. For a simple and equal-weighted approach, one can set $w_i = 1/n$.

In the following, for notation simplicity, x and y denote the panchromatic forms of the two images.

- Luminance comparison (l):

$$l(x, y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}$$

with $\mu_x$ the mean value of x, $\mu_y$ the mean value of y, and $C_1$ a constant for division stability when the denominator is small.

- Contrast comparison (c):

$$c(x, y) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}$$

with $\sigma_x^2$ the variance of x, $\sigma_y^2$ the variance of y, and $C_2$ for division stability when the denominator is small.

- Structure comparison (s):

$$s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}$$

with $\sigma_{xy}$ the covariance of x and y, and $C_3$ for division stability when the denominator is small.

All of these constituents give us the overall SSI:

$$\mathrm{SSI}(x, y) = l(x, y)\, c(x, y)\, s(x, y)$$

from which the Structural Dissimilarity Index can naturally be derived ($\mathrm{DSSI} = (1 - \mathrm{SSI})/2$).
Appendix C.2. Jensen–Shannon Distance
The Jensen–Shannon Distance (JSD) is a measure of similarity between two distributions. It is derived from the Kullback–Leibler Divergence (KLD), a measure of how one probability distribution P diverges from a second, expected probability distribution Q:

$$D_{KL}(P \,\|\, Q) = \sum_{i} P(i)\, \log \frac{P(i)}{Q(i)}$$

The KLD is neither symmetric nor bounded; this is where the Jensen–Shannon Distance is needed. Derived from the KLD, it requires several steps:

Calculate the average distribution M: $M = \frac{1}{2}(P + Q)$

Calculate the Kullback–Leibler Divergence between P and M: $D_{KL}(P \,\|\, M)$

Calculate the Kullback–Leibler Divergence between Q and M: $D_{KL}(Q \,\|\, M)$

Calculate the Jensen–Shannon Distance: $\mathrm{JSD}(P, Q) = \sqrt{\tfrac{1}{2} D_{KL}(P \,\|\, M) + \tfrac{1}{2} D_{KL}(Q \,\|\, M)}$
Appendix C.3. Inception Score
The Inception Score (IS) is a metric commonly employed in the evaluation of generative models, particularly in the context of Generative Adversarial Networks (GANs). It aims to assess the quality and diversity of generated images.
Barratt et al. [
The metric leverages the outputs of a classifier model. The underlying idea is to utilize the model’s predictions on generated images to quantify their quality and diversity.
- $\mathrm{IS}(G)$: the Inception Score for the generator G. It quantifies the quality and diversity of generated images.
- $\exp(\cdot)$: the exponential function. It is applied to the expected Kullback–Leibler divergence to emphasize significant differences.
- $\mathbb{E}_{x \sim G}$: the expectation operator, indicating that we are taking the average over samples x generated by the generator G.
- $D_{KL}\big(p(y|x) \,\|\, p(y)\big)$: the KL divergence between the conditional distribution $p(y|x)$ and the marginal distribution $p(y)$. It measures the difference between the predicted class distribution given an image x and the overall class distribution.

With

$$p(y) = \int_x p(y|x)\, p_g(x)\, dx \approx \frac{1}{N} \sum_{i=1}^{N} p(y|x_i)$$

Hence,

$$\mathrm{IS}(G) = \exp\Big( \mathbb{E}_{x \sim G} \big[ D_{KL}\big(p(y|x) \,\|\, p(y)\big) \big] \Big)$$
In the current state of the art, the classifier model is usually an InceptionV3 trained on datasets of various images. For our needs, we modified this setup to better fit the problem: we trained a ResNet101 from scratch on Earth Observation images taken by Sentinel-2, using the EuroSAT dataset.
This ResNet is thus trained on pertinent data coming from the same satellite used for image fusion, and the IS calculation is performed using the corresponding contextual classification.
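A sketch of the Inception Score computation from the classifier outputs is given below; the logits are placeholders standing in for the ResNet101 predictions on fused tiles, and 10 classes are assumed (as in EuroSAT):

```python
import torch
import torch.nn.functional as F

def inception_score(logits):
    """Inception Score from classifier logits over a set of generated (fused) image tiles:
    IS = exp( mean_x KL( p(y|x) || p(y) ) )."""
    p_yx = F.softmax(logits, dim=1)                    # conditional class distribution per tile
    p_y = p_yx.mean(dim=0, keepdim=True)               # marginal class distribution
    kl = (p_yx * (torch.log(p_yx + 1e-12) - torch.log(p_y + 1e-12))).sum(dim=1)
    return torch.exp(kl.mean()).item()

# Placeholder logits for 64 tiles and 10 classes (the real values would come from
# the contextual ResNet101 classifier applied to the fused product).
logits = torch.randn(64, 10)
print(inception_score(logits))
```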
When inferring with a ground truth available (c.f.
Metrics used to determine the distance between Sentinel-3 and the degraded fusion.
Metric | Formula
---|---
RMSE | $\sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - y_i)^2}$
Euclidean distance | $\sqrt{\sum_{i=1}^{N}(x_i - y_i)^2}$
Cosine similarity | $\frac{\sum_{i} x_i\, y_i}{\|x\|\,\|y\|}$
Appendix D. Gaussian Kernel and Bicubic Interpolation
The Gaussian kernel we used in this study is defined as:

$$G(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$$

The kernel is used as a sliding window to convolve the input prediction, giving us the spatially blurred fused product. The interpolation used in this study is bicubic and is defined by the cubic convolution kernel

$$W(t) = \begin{cases} (a+2)|t|^3 - (a+3)|t|^2 + 1 & |t| \le 1 \\ a|t|^3 - 5a|t|^2 + 8a|t| - 4a & 1 < |t| < 2 \\ 0 & \text{otherwise} \end{cases}$$

with a commonly set to −0.5.
Appendix E. NDVI
Appendix F. GSD 10 EO-Trained Network Fusions
Fused products metrics over Greece. Metrics that need to be maximized are noted (↑), best is 1. Metrics that need to be minimized are noted (↓), best is 0. In the case of the IS, no finite upper limit is defined.
Metric | EO-Trained Fusion |
---|---|
J-S Divergence (↓) | 0.051 |
SAM (↓) | 0.056 |
SSI (↑) | 0.956 |
IS (↑) | 1.170 |
Fused product metrics over Algeria. Metrics to be maximized are noted (↑); the best value is 1. Metrics to be minimized are noted (↓); the best value is 0. For the IS, no finite upper limit is defined.
Metric | EO-Trained Fusion |
---|---|
J-S Divergence (↓) | 0.078 |
SAM (↓) | 0.015 |
SSI (↑) | 0.993 |
IS (↑) | 1.776 |
Fused product metrics over Amazonia. Metrics to be maximized are noted (↑); the best value is 1. Metrics to be minimized are noted (↓); the best value is 0. For the IS, no finite upper limit is defined.
Metric | EO-Trained Fusion |
---|---|
J-S Divergence (↓) | 0.096 |
SAM (↓) | 0.011 |
SSI (↑) | 0.980 |
IS (↑) | 1.561 |
Fused product metrics over Angola. Metrics to be maximized are noted (↑); the best value is 1. Metrics to be minimized are noted (↓); the best value is 0. For the IS, no finite upper limit is defined.
Metric | EO-Trained Fusion |
---|---|
J-S Divergence (↓) | 0.071 |
SAM (↓) | 0.013 |
SSI (↑) | 0.969 |
IS (↑) | 2.328 |
Fused product metrics over Australia. Metrics to be maximized are noted (↑); the best value is 1. Metrics to be minimized are noted (↓); the best value is 0. For the IS, no finite upper limit is defined.
Metric | EO-Trained Fusion |
---|---|
J-S Divergence (↓) | 0.073 |
SAM (↓) | 0.017 |
SSI (↑) | 0.986 |
IS (↑) | 5.015 |
Fused product metrics over Botswana. Metrics to be maximized are noted (↑); the best value is 1. Metrics to be minimized are noted (↓); the best value is 0. For the IS, no finite upper limit is defined.
Metric | EO-Trained Fusion |
---|---|
J-S Divergence (↓) | 0.078 |
SAM (↓) | 0.027 |
SSI (↑) | 0.993 |
IS (↑) | 1.776 |
Fused product metrics over Peru. Metrics to be maximized are noted (↑); the best value is 1. Metrics to be minimized are noted (↓); the best value is 0. For the IS, no finite upper limit is defined.
Metric | EO-Trained Fusion |
---|---|
J-S Divergence (↓) | 0.077 |
SAM (↓) | 0.034 |
SSI (↑) | 0.960 |
IS (↑) | 4.550 |
References
1. Du, Y.; Song, W.; He, Q.; Huang, D.; Liotta, A.; Su, C. Deep learning with multi-scale feature fusion in remote sensing for automatic oceanic eddy detection. Inf. Fusion; 2019; 49, pp. 89-99. [DOI: https://dx.doi.org/10.1016/j.inffus.2018.09.006]
2. Nguyen, H.; Cressie, N.; Braverman, A. Spatial statistical data fusion for remote sensing applications. J. Am. Stat. Assoc.; 2012; 107, pp. 1004-1018. [DOI: https://dx.doi.org/10.1080/01621459.2012.694717]
3. Chang, S.; Deng, Y.; Zhang, Y.; Zhao, Q.; Wang, R.; Zhang, K. An advanced scheme for range ambiguity suppression of spaceborne SAR based on blind source separation. IEEE Trans. Geosci. Remote Sens.; 2022; 60, pp. 1-12. [DOI: https://dx.doi.org/10.1109/TGRS.2022.3184709]
4. Jutz, S.; Milagro-Perez, M. Copernicus: The European Earth Observation programme. Rev. De Teledetección; 2020; 56, pp. V-XI. [DOI: https://dx.doi.org/10.4995/raet.2020.14346]
5. Gomez, C.; Viscarra Rossel, R.A.; McBratney, A.B. Soil organic carbon prediction by hyperspectral remote sensing and field vis-NIR spectroscopy: An Australian case study. Geoderma; 2008; 146, pp. 403-411. [DOI: https://dx.doi.org/10.1016/j.geoderma.2008.06.011]
6. Carter, G.A.; Lucas, K.L.; Blossom, G.A.; Lassitter, C.L.; Holiday, D.M.; Mooneyhan, D.S.; Fastring, D.R.; Holcombe, T.R.; Griffith, J.A. Remote Sensing and Mapping of Tamarisk along the Colorado River, USA: A Comparative Use of Summer-Acquired Hyperion, Thematic Mapper and QuickBird Data. Remote Sens.; 2009; 1, pp. 318-329. [DOI: https://dx.doi.org/10.3390/rs1030318]
7. Okujeni, A.; van der Linden, S.; Hostert, P. Extending the vegetation–impervious–soil model using simulated EnMAP data and machine learning. Remote Sens. Environ.; 2015; 158, pp. 69-80. [DOI: https://dx.doi.org/10.1016/j.rse.2014.11.009]
8. Fernández, J.; Fernández, C.; Féménias, P.; Peter, H. The copernicus sentinel-3 mission. Proceedings of the ILRS Workshop; Annapolis, MD, USA, 10 October 2016; pp. 1-4.
9. Aziz, M.A.; Haldar, D.; Danodia, A.; Chauhan, P. Use of time series Sentinel-1 and Sentinel-2 image for rice crop inventory in parts of Bangladesh. Appl. Geomat.; 2023; 15, pp. 407-420. [DOI: https://dx.doi.org/10.1007/s12518-023-00501-2]
10. Grabska, E.; Hostert, P.; Pflugmacher, D.; Ostapowicz, K. Forest stand species mapping using the Sentinel-2 time series. Remote Sens.; 2019; 11, 1197. [DOI: https://dx.doi.org/10.3390/rs11101197]
11. Malenovský, Z.; Rott, H.; Cihlar, J.; Schaepman, M.E.; García-Santos, G.; Fernandes, R.; Berger, M. Sentinels for science: Potential of Sentinel-1, -2, and -3 missions for scientific observations of ocean, cryosphere, and land. Remote Sens. Environ.; 2012; 120, pp. 91-101. [DOI: https://dx.doi.org/10.1016/j.rse.2011.09.026]
12. Toming, K.; Kutser, T.; Uiboupin, R.; Arikas, A.; Vahter, K.; Paavel, B. Mapping water quality parameters with sentinel-3 ocean and land colour instrument imagery in the Baltic Sea. Remote Sens.; 2017; 9, 1070. [DOI: https://dx.doi.org/10.3390/rs9101070]
13. Chen, C.; Dubovik, O.; Litvinov, P.; Fuertes, D.; Lopatin, A.; Lapyonok, T.; Matar, C.; Karol, Y.; Fischer, J.; Preusker, R. et al. Properties of aerosol and surface derived from OLCI/Sentinel-3A using GRASP approach: Retrieval development and preliminary validation. Remote Sens. Environ.; 2022; 280, 113142. [DOI: https://dx.doi.org/10.1016/j.rse.2022.113142]
14. Tarasiewicz, T.; Nalepa, J.; Farrugia, R.A.; Valentino, G.; Chen, M.; Briffa, J.A.; Kawulok, M. Multitemporal and multispectral data fusion for super-resolution of Sentinel-2 images. IEEE Trans. Geosci. Remote Sens.; 2023; 61, pp. 1-19. [DOI: https://dx.doi.org/10.1109/TGRS.2023.3311622]
15. Vivone, G.; Restaino, R.; Licciardi, G.; Dalla Mura, M.; Chanussot, J. Multiresolution analysis and component substitution techniques for hyperspectral pansharpening. Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium; Quebec City, QC, Canada, 13–18 July 2014; pp. 2649-2652.
16. Loncan, L.; De Almeida, L.B.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simoes, M. et al. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag.; 2015; 3, pp. 27-46. [DOI: https://dx.doi.org/10.1109/MGRS.2015.2440094]
17. Selva, M.; Aiazzi, B.; Butera, F.; Chiarantini, L.; Baronti, S. Hyper-sharpening: A first approach on SIM-GA data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2015; 8, pp. 3008-3024. [DOI: https://dx.doi.org/10.1109/JSTARS.2015.2440092]
18. Wei, Q.; Dobigeon, N.; Tourneret, J.Y. Bayesian fusion of multi-band images. IEEE J. Sel. Top. Signal Process.; 2015; 9, pp. 1117-1127. [DOI: https://dx.doi.org/10.1109/JSTSP.2015.2407855]
19. Zhang, Y.; De Backer, S.; Scheunders, P. Noise-resistant wavelet-based Bayesian fusion of multispectral and hyperspectral images. IEEE Trans. Geosci. Remote Sens.; 2009; 47, pp. 3834-3843. [DOI: https://dx.doi.org/10.1109/TGRS.2009.2017737]
20. Zhang, Y.; De Backer, S.; Scheunders, P. Bayesian fusion of multispectral and hyperspectral image in wavelet domain. Proceedings of the IGARSS 2008–2008 IEEE International Geoscience and Remote Sensing Symposium; Boston, MA, USA, 7–11 July 2008; Volume 5, V-69.
21. Dong, W.; Fu, F.; Shi, G.; Cao, X.; Wu, J.; Li, G.; Li, X. Hyperspectral image super-resolution via non-negative structured sparse representation. IEEE Trans. Image Process.; 2016; 25, pp. 2337-2352. [DOI: https://dx.doi.org/10.1109/TIP.2016.2542360] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27019486]
22. Wei, Q.; Bioucas-Dias, J.; Dobigeon, N.; Tourneret, J.Y.; Chen, M.; Godsill, S. Multiband image fusion based on spectral unmixing. IEEE Trans. Geosci. Remote Sens.; 2016; 54, pp. 7236-7249. [DOI: https://dx.doi.org/10.1109/TGRS.2016.2598784]
23. Zhang, K.; Wang, M.; Yang, S.; Jiao, L. Spatial–spectral-graph-regularized low-rank tensor decomposition for multispectral and hyperspectral image fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2018; 11, pp. 1030-1040. [DOI: https://dx.doi.org/10.1109/JSTARS.2017.2785411]
24. Vivone, G. Multispectral and hyperspectral image fusion in remote sensing: A survey. Inf. Fusion; 2023; 89, pp. 405-417. [DOI: https://dx.doi.org/10.1016/j.inffus.2022.08.032]
25. Zhang, H.; Xu, H.; Tian, X.; Jiang, J.; Ma, J. Image fusion meets deep learning: A survey and perspective. Inf. Fusion; 2021; 76, pp. 323-336. [DOI: https://dx.doi.org/10.1016/j.inffus.2021.06.008]
26. Uezato, T.; Hong, D.; Yokoya, N.; He, W. Guided deep decoder: Unsupervised image pair fusion. Computer Vision, Proceedings of the ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part VI Springer: Cham, Switzerland, 2020; pp. 87-102.
27. Sara, D.; Mandava, A.K.; Kumar, A.; Duela, S.; Jude, A. Hyperspectral and multispectral image fusion techniques for high resolution applications: A review. Earth Sci. Inform.; 2021; 14, pp. 1685-1705. [DOI: https://dx.doi.org/10.1007/s12145-021-00621-6]
28. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and multispectral data fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag.; 2017; 5, pp. 29-56. [DOI: https://dx.doi.org/10.1109/MGRS.2016.2637824]
29. Guzinski, R.; Nieto, H. Evaluating the feasibility of using Sentinel-2 and Sentinel-3 satellites for high-resolution evapotranspiration estimations. Remote Sens. Environ.; 2019; 221, pp. 157-172. [DOI: https://dx.doi.org/10.1016/j.rse.2018.11.019]
30. Lin, C.; Zhu, A.X.; Wang, Z.; Wang, X.; Ma, R. The refined spatiotemporal representation of soil organic matter based on remote images fusion of Sentinel-2 and Sentinel-3. Int. J. Appl. Earth Obs. Geoinf.; 2020; 89, 102094. [DOI: https://dx.doi.org/10.1016/j.jag.2020.102094]
31. Fernandez, R.; Fernandez-Beltran, R.; Kang, J.; Pla, F. Sentinel-3 super-resolution based on dense multireceptive channel attention. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2021; 14, pp. 7359-7372. [DOI: https://dx.doi.org/10.1109/JSTARS.2021.3097410]
32. Sobrino, J.A.; Irakulis, I. A methodology for comparing the surface urban heat island in selected urban agglomerations around the world from Sentinel-3 SLSTR data. Remote Sens.; 2020; 12, 2052. [DOI: https://dx.doi.org/10.3390/rs12122052]
33. Dian, R.; Li, S.; Sun, B.; Guo, A. Recent advances and new guidelines on hyperspectral and multispectral image fusion. Inf. Fusion; 2021; 69, pp. 40-51. [DOI: https://dx.doi.org/10.1016/j.inffus.2020.11.001]
34. Wang, J.; Shao, Z.; Huang, X.; Lu, T.; Zhang, R.; Ma, J. Pan-sharpening via high-pass modification convolutional neural network. Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP); Anchorage, AK, USA, 19–22 September 2021; pp. 1714-1718.
35. Iordache, M.D.; Bioucas-Dias, J.; Plaza, A. Total Variation Spatial Regularization for Sparse Hyperspectral Unmixing. IEEE Trans. Geosci. Remote Sens.; 2012; 50, pp. 4484-4502. [DOI: https://dx.doi.org/10.1109/TGRS.2012.2191590]
36. Rojas, R. The backpropagation algorithm. Neural Networks: A Systematic Introduction; Springer: Berlin/Heidelberg, Germany, 1996; pp. 149-182.
37. Jia, S.; Qian, Y. Spectral and spatial complexity-based hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens.; 2007; 45, pp. 3867-3879.
38. Zhang, X.; Huang, W.; Wang, Q.; Li, X. SSR-NET: Spatial–spectral reconstruction network for hyperspectral and multispectral image fusion. IEEE Trans. Geosci. Remote Sens.; 2020; 59, pp. 5953-5965. [DOI: https://dx.doi.org/10.1109/TGRS.2020.3018732]
39. Chakrabarti, A.; Zickler, T. Statistics of real-world hyperspectral images. Proceedings of the CVPR 2011; Colorado Springs, CO, USA, 20–25 June 2011; pp. 193-200.
40. Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S. Generalized Assorted Pixel Camera: Post-Capture Control of Resolution, Dynamic Range and Spectrum; Technical Report; IEEE: New York, NY, USA, 2008.
41. de Los Reyes, R.; Langheinrich, M.; Alonso, K.; Bachmann, M.; Carmona, E.; Gerasch, B.; Holzwarth, S.; Marshall, D.; Müller, R.; Pato, M. et al. Atmospheric Correction of DESIS and EnMAP Hyperspectral Data: Validation of L2a Products. Proceedings of the IGARSS 2023–2023 IEEE International Geoscience and Remote Sensing Symposium; Pasadena, CA, USA, 16–21 July 2023; pp. 1034-1037.
42. Dong, W.; Zhou, C.; Wu, F.; Wu, J.; Shi, G.; Li, X. Model-guided deep hyperspectral image super-resolution. IEEE Trans. Image Process.; 2021; 30, pp. 5754-5768. [DOI: https://dx.doi.org/10.1109/TIP.2021.3078058] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33979283]
43. Ilesanmi, A.E.; Ilesanmi, T.O. Methods for image denoising using convolutional neural network: A review. Complex Intell. Syst.; 2021; 7, pp. 2179-2198. [DOI: https://dx.doi.org/10.1007/s40747-021-00428-4]
44. Chabrillat, S.; Guanter, L.; Kaufmann, H.; Förster, S.; Beamish, A.; Brosinsky, A.; Wulf, H.; Asadzadeh, S.; Bochow, M.; Bohn, N. et al. EnMAP Science Plan; GFZ Data Services: Potsdam, Germany, 2022.
45. Chander, G.; Mishra, N.; Helder, D.L.; Aaron, D.B.; Angal, A.; Choi, T.; Xiong, X.; Doelling, D.R. Applications of spectral band adjustment factors (SBAF) for cross-calibration. IEEE Trans. Geosci. Remote Sens.; 2012; 51, pp. 1267-1281. [DOI: https://dx.doi.org/10.1109/TGRS.2012.2228007]
46. Ran, R.; Deng, L.J.; Jiang, T.X.; Hu, J.F.; Chanussot, J.; Vivone, G. GuidedNet: A general CNN fusion framework via high-resolution guidance for hyperspectral image super-resolution. IEEE Trans. Cybern.; 2023; 53, pp. 4148-4161. [DOI: https://dx.doi.org/10.1109/TCYB.2023.3238200] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37022388]
47. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell.; 2015; 38, pp. 295-307. [DOI: https://dx.doi.org/10.1109/TPAMI.2015.2439281] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26761735]
48. Dian, R.; Li, S.; Kang, X. Regularizing hyperspectral and multispectral image fusion by CNN denoiser. IEEE Trans. Neural Netw. Learn. Syst.; 2020; 32, pp. 1124-1135. [DOI: https://dx.doi.org/10.1109/TNNLS.2020.2980398] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32310788]
49. Qu, Y.; Qi, H.; Kwan, C. Unsupervised sparse dirichlet-net for hyperspectral image super-resolution. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Salt Lake City, UT, USA, 18–23 June 2018; pp. 2511-2520.
50. Xie, Q.; Zhou, M.; Zhao, Q.; Xu, Z.; Meng, D. MHF-Net: An interpretable deep network for multispectral and hyperspectral image fusion. IEEE Trans. Pattern Anal. Mach. Intell.; 2020; 44, pp. 1457-1473. [DOI: https://dx.doi.org/10.1109/TPAMI.2020.3015691]
51. Yao, J.; Hong, D.; Chanussot, J.; Meng, D.; Zhu, X.; Xu, Z. Cross-attention in coupled unmixing nets for unsupervised hyperspectral super-resolution. Computer Vision, Proceedings of the ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXIX 16 Springer: Cham, Switzerland, 2020; pp. 208-224.
52. Hu, J.F.; Huang, T.Z.; Deng, L.J.; Dou, H.X.; Hong, D.; Vivone, G. Fusformer: A transformer-based fusion network for hyperspectral image super-resolution. IEEE Geosci. Remote Sens. Lett.; 2022; 19, pp. 1-5. [DOI: https://dx.doi.org/10.1109/LGRS.2022.3194257]
53. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems, Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017); Long Beach, CA, USA, 4–9 December 2017.
54. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv; 2020; arXiv: 2010.11929
55. Cosnefroy, H.; Briottet, X.; Leroy, M.; Lecomte, P.; Santer, R. A field experiment in Saharan Algeria for the calibration of optical satellite sensors. Int. J. Remote Sens.; 1997; 18, pp. 3337-3359. [DOI: https://dx.doi.org/10.1080/014311697216919]
56. Lacherade, S.; Fougnie, B.; Henry, P.; Gamet, P. Cross calibration over desert sites: Description, methodology, and operational implementation. IEEE Trans. Geosci. Remote Sens.; 2013; 51, pp. 1098-1113. [DOI: https://dx.doi.org/10.1109/TGRS.2012.2227061]
57. Cosnefroy, H.; Leroy, M.; Briottet, X. Selection and characterization of Saharan and Arabian desert sites for the calibration of optical satellite sensors. Remote Sens. Environ.; 1996; 58, pp. 101-114. [DOI: https://dx.doi.org/10.1016/0034-4257(95)00211-1]
58. Myneni, R.B.; Hall, F.G.; Sellers, P.J.; Marshak, A.L. The interpretation of spectral vegetation indexes. IEEE Trans. Geosci. Remote Sens.; 1995; 33, pp. 481-486. [DOI: https://dx.doi.org/10.1109/TGRS.1995.8746029]
59. Pettorelli, N.; Vik, J.O.; Mysterud, A.; Gaillard, J.M.; Tucker, C.J.; Stenseth, N.C. Using the satellite-derived NDVI to assess ecological responses to environmental change. Trends Ecol. Evol.; 2005; 20, pp. 503-510. [DOI: https://dx.doi.org/10.1016/j.tree.2005.05.011]
60. Robinson, C.; Hou, L.; Malkin, K.; Soobitsky, R.; Czawlytko, J.; Dilkina, B.; Jojic, N. Large Scale High-Resolution Land Cover Mapping with Multi-Resolution Data. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Long Beach, CA, USA, 15–20 June 2019; pp. 12726-12735.
61. Costa, L.d.F. Further generalizations of the Jaccard index. arXiv; 2021; arXiv: 2110.09619
62. Kruse, F.A.; Lefkoff, A.; Boardman, J.; Heidebrecht, K.; Shapiro, A.; Barloon, P.; Goetz, A. The spectral image processing system (SIPS)—interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ.; 1993; 44, pp. 145-163. [DOI: https://dx.doi.org/10.1016/0034-4257(93)90013-N]
63. Brunet, D.; Vrscay, E.R.; Wang, Z. On the mathematical properties of the structural similarity index. IEEE Trans. Image Process.; 2011; 21, pp. 1488-1499. [DOI: https://dx.doi.org/10.1109/TIP.2011.2173206] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/22042163]
64. Endres, D.M.; Schindelin, J.E. A new metric for probability distributions. IEEE Trans. Inf. Theory; 2003; 49, pp. 1858-1860. [DOI: https://dx.doi.org/10.1109/TIT.2003.813506]
65. Fuglede, B.; Topsoe, F. Jensen–Shannon divergence and Hilbert space embedding. Proceedings of the International Symposium on Information Theory (ISIT 2004); Chicago, IL, USA, 27 June–2 July 2004; p. 31.
66. Barratt, S.; Sharma, R. A note on the inception score. arXiv; 2018; arXiv: 1801.01973
67. Helber, P.; Bischke, B.; Dengel, A.; Borth, D. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2019; 12, pp. 2217-2226. [DOI: https://dx.doi.org/10.1109/JSTARS.2019.2918242]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
With the increasing number of ongoing space missions for Earth Observation (EO), there is a need to enhance data products by combining observations from various remote sensing instruments. We introduce a new Transformer-based approach for data fusion, achieving up to a 10- to 30-fold increase in the spatial resolution of our hyperspectral data. We trained the network on a synthetic set of Sentinel-2 (S2) and Sentinel-3 (S3) images, simulated from the hyperspectral mission EnMAP (30 m resolution), leading to a fused product of 21 bands at a 30 m ground resolution. The performances were calculated by fusing original S2 (12 bands; 10, 20, and 60 m resolutions) and S3 (21 bands, 300 m resolution) images. To go beyond EnMAP's ground resolution, the network was also trained using a generic set of non-EO images from the CAVE dataset. However, we found that training the network on contextually relevant data is crucial: the EO-trained network significantly outperformed the non-EO-trained one. Finally, we observed that the original network, trained at 30 m ground resolution, performed well when fed images at 10 m ground resolution, likely due to the flexibility of Transformer-based networks.
Details
1 ACRI-ST, Centre d’Etudes et de Recherche de Grasse (CERGA), 10 Av. Nicolas Copernic, 06130 Grasse, France