1. Introduction
Panchromatic images with high spatial resolution are widely used in applications that require detailed spatial information. However, these images lack spectral (color) information, which makes them unsuitable for applications such as resource monitoring and agriculture. Multispectral images, on the other hand, provide spectral information about surface objects, but their spatial resolution is usually too low for many practical applications. In practice, both high spatial resolution and spectral information are needed, yet current sensor technology cannot directly acquire remote sensing images that offer both [1]. Fusion of multispectral and panchromatic images is therefore a widely used technique in remote sensing: it combines the spatial detail of high-resolution panchromatic images with the spectral information of low-resolution multispectral images to produce high-resolution multispectral images [2,3]. The fusion process aims to exploit the complementary strengths of the two image types, improving the spatial resolution of multispectral images while minimizing the loss of spectral information [4].
Image fusion is typically categorized into four levels: pixel level, feature level, confidence level, and decision level [5]. Pixel-level fusion operates directly on the individual pixels of the original remote sensing images; it has the widest range of applications and the most extensive body of research results, and it remains an active research topic. The fusion of multispectral images with panchromatic images is also termed pan-sharpening [6]. Many pan-sharpening methods have been proposed over the years. In general, they can be classified into traditional component-substitution and multi-resolution analysis methods, as well as newer model-based (MB) and deep neural network-based methods; a simple component-substitution example is sketched below.
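To make the component-substitution idea concrete, the following is a minimal NumPy sketch of the Brovey transform, one of the classic component-substitution methods. The function name, the array shapes, and the assumption that the multispectral bands have already been resampled to the panchromatic grid are illustrative choices, not details taken from this paper.

import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    # ms:  (H, W, B) multispectral bands, assumed already upsampled
    #      (e.g., bicubically) to the panchromatic grid
    # pan: (H, W) co-registered panchromatic band
    intensity = ms.mean(axis=2)        # synthetic intensity from the MS bands
    gain = pan / (intensity + eps)     # per-pixel injection gain; eps avoids /0
    return ms * gain[..., None]        # rescale every band by the same gain

Because each band is multiplied by the same per-pixel ratio of panchromatic intensity to synthetic multispectral intensity, the Brovey transform injects the panchromatic spatial detail into all bands at once; the price, as with other component-substitution methods, is a tendency toward spectral distortion.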
With the rapid development of high-resolution satellites, many remote sensing applications require both high spatial and high spectral resolution, so the two kinds of imagery must be fused into products that offer both. This significantly expands the potential applications of raw remote sensing imagery. Tang [7] used panchromatic and multispectral images from the BJ-2 and GF-2 satellites as data sources and conducted fusion experiments with the wavelet transform, PCA (principal component analysis), HPF (high-pass filter), GS (Gram–Schmidt), and Brovey methods. Wu et al. [8] conducted experiments on GF-1 and ZY-3 satellite images using the GS, PCA, and CN transform fusion methods. Xing [9]...
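Of the methods named above, high-pass-filter (HPF) fusion is among the simplest to illustrate. The following sketch extracts the high-frequency detail of the panchromatic image with a box filter and adds it to each resampled multispectral band; the function name, the window size, and the unit injection weight are illustrative assumptions, not parameters reported in the cited experiments.

from scipy.ndimage import uniform_filter

def hpf_fusion(ms, pan, size=5):
    # ms:  (H, W, B) multispectral bands resampled to the panchromatic grid
    # pan: (H, W) co-registered panchromatic band
    detail = pan - uniform_filter(pan, size=size)  # high-pass component of PAN
    return ms + detail[..., None]                  # inject detail into each band

Because the detail is added rather than multiplied, HPF fusion tends to preserve the spectral content of the multispectral bands better than ratio-based schemes such as the Brovey transform.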