1. Introduction
Synthetic aperture radar (SAR), a microwave imaging technique that is insensitive to clouds, can continuously capture detailed information about objects on Earth [1,2,3]. SAR is widely used in ocean monitoring, maritime traffic regulation, military applications, disaster warning, agriculture, etc. [4,5,6]. With the development of maritime trade, ship detection has become a hotspot for the application of SAR images, as shown in Figure 1. However, the accuracy and robustness of detection algorithms are generally affected by speckle noise and complex backgrounds [7,8]. SAR exhibits strong penetration capability and is not influenced by light or weather conditions, making it a valuable tool for various imaging applications. Nevertheless, certain factors, such as sea clutter, variations in brightness, and sensor noise, must be carefully considered. The roughness of the sea surface, particularly in the presence of waves, generates significant clutter that can obscure smaller or less-reflective targets. Variations in brightness can also hinder detection, especially in complex backgrounds, because the intensity of targets in SAR images is influenced by several factors, including surface roughness, the angle of incidence, and the material properties of the target. Furthermore, SAR sensors introduce various forms of noise, such as thermal noise and system-related imperfections, which compromise the quality of the acquired data. Many studies confirm that enhancing image saliency is crucial for improving both the clarity and the discriminability of ships, thereby ensuring higher detection accuracy [9,10].
As an important aspect of image processing, image saliency enhancement has been applied broadly in the fields of target detection, classification, image quality improvement, segmentation, etc. [11,12,13,14,15]. Existing methods for enhancing image saliency can be divided into traditional and deep-learning-based methods.
Traditional methods mainly rely on mathematical models and signal processing techniques, such as multi-scale analysis, wavelet transforms, and morphological filtering, that enhance image saliency based on local features and statistical information. For example, Shi et al. [16] proposed an image fusion method that exploits multi-scale information to improve the features of remote sensing images. Based on Retinex theory, Guo and Sim [17] applied recursive filtering to improve image quality. Chen et al. [18] designed an adaptive local ternary pattern to enlarge local contrast. Heijmans and Roerdink [19] exploited morphological filtering, which is rooted in algebra and geometry, to enhance image saliency. However, these methods generally suffer from complex backgrounds and massive noise, which weakens their detection accuracy and raises their false alarm rates. In addition, their effectiveness depends on the quality of hand-crafted features, which are often unreliable in complex scenarios.
With the development of deep learning (DL) techniques [20,21], convolutional neural networks (CNNs), which excel at extracting deep representations, have become the mainstream approach for enhancing image saliency. Simonyan and Zisserman [22] designed the well-known VGGNet to extract very deep features for large-scale image recognition, which forms a basis for image saliency enhancement. Integrating inception residual blocks, Wang et al. [23] built a Retinex decomposition network to enhance low-light images. To improve model training, Goodfellow et al. [24] designed the generative adversarial network (GAN) to generate higher-quality samples. On the basis of the GAN, Liu and Shen [25] combined salient region detection and attention mechanisms to realize image super-resolution. Although CNNs have shown good performance, they are hard to deploy on light-weight edge computing devices due to their high complexity. Furthermore, the data transmission rate between satellites and the ground is only hundreds of Mbps, far slower than the several-Gbps acquisition rate of SAR data [26]. Transmitting entire SAR images, which often contain a great deal of useless information, to ground stations occupies additional bandwidth, thereby hindering real-time ship detection [27,28,29].
The YOLO (You Only Look Once) [30] series of models has achieved significant success in the field of object detection. The primary concept behind YOLO is its ability to detect multiple objects within an image in real time using a single forward pass, significantly enhancing detection efficiency. YOLO achieves this by partitioning the image into grids, with each grid cell responsible for predicting the object classes and bounding boxes within its region. YOLOX [31], an advanced iteration of the original YOLO model, introduces several key innovations. Notably, it adopts an anchor-free design, eliminating the reliance on predefined anchor boxes. Additionally, YOLOX separates the classification and regression tasks into two distinct branches, thereby mitigating the interference between the two tasks that was present in the original YOLO model. The Mosaic data augmentation strategy, which stitches four images together during training, enhances the model's generalization capability. Furthermore, YOLOX employs the SimOTA strategy to optimize the label assignment process, improving matching efficiency and overall detection performance. YOLO-based models are now widely applied in real-time object detection, cross-category high-quality detection, and small object detection, and they have demonstrated excellent performance in domains such as autonomous driving, healthcare, and video surveillance.
In this article, a light-weight SAR image saliency enhancement method (ISEM) based on sea–land segmentation preference is proposed. As shown in Figure 2, the land areas are first identified adaptively from the binary image histogram after image denoising. A morphological operation is then applied to connect land regions into an accurate land mask, which reduces the interference of land areas during target detection. Finally, an image saliency enhancement method based on the spectral residual (SR) is designed to efficiently enhance the visual saliency of ships and suppress redundant background information, including sea clutter, noise, and land areas. To verify its effectiveness, the proposed method was optimized and embedded into light-weight edge computing platforms, with parallel computing and hardware acceleration considered during deployment. Experimental results show that the ISEM not only effectively improves the accuracy, robustness, and speed of ship detection but also significantly reduces the volume of transmitted data and the computational cost.
2. Image Denoising
Electromagnetic waves emitted by SAR reach objects and are reflected back to the receiver. The echo signal in a resolution cell is the sum of many scattered echoes. Speckle noise, which usually appears as spots with intense grayscale values compared with their neighboring areas, is caused by interference among the different scattered echoes. Furthermore, the thermal noise generated by the internal electronics of the radar device during operation also reduces image quality. Speckle noise is the main negative factor for image understanding and interpretation; hence, denoising is necessary to ensure detection accuracy and robustness.
Image filtering can remove noise and improve visual quality. Commonly used methods for SAR image processing include mean filtering [32], median filtering [33], bilateral filtering [34], Lee filtering [35], and Frost filtering [36]. In this article, median filtering is used to remove the speckle noise of SAR images. It is a nonlinear filter that sorts the grayscale values within a window centered on each pixel and takes the middle value as the new grayscale value. Compared with mean filtering, median filtering removes speckle noise better and retains edge information.
The size of the filter is the key parameter affecting the denoising results. Smaller filters preserve image details, but the denoising effect may be poor; larger filters denoise well but may blur the image's structures. After experimental evaluation (Section 6.2), this article selects a filter size of 5 × 5. The denoising process for an SAR image $I$ can be described as follows:

$I_f = f_{\mathrm{med}}(I, w), \quad I \in \mathbb{R}^{H \times W}$ (1)

where $H$ and $W$ are the height and width of the image, $f_{\mathrm{med}}(\cdot)$ is the filtering function, and $w$ is the 5 × 5 filter window.
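As a minimal sketch of this step with OpenCV, assuming a single-channel 8-bit SAR image (the file name is hypothetical), cv2.medianBlur performs exactly the sort-and-take-the-middle-value operation described above:

```python
import cv2

# Load a single-channel 8-bit SAR image (hypothetical file name).
img = cv2.imread("sar_image.png", cv2.IMREAD_GRAYSCALE)

# 5 x 5 median filter: sorts each 5 x 5 neighborhood and keeps the
# middle value, removing speckle while preserving edges (Eq. (1)).
img_f = cv2.medianBlur(img, 5)
```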
3. Sea–Land Segmentation
Complex sea and land regions and the interference of land areas often reduce the detection accuracy of marine targets in SAR images. To handle this problem, a sea–land segmentation algorithm is designed to effectively distinguish sea areas from land areas, thereby providing a clean background for ship detection. The algorithm includes land recognition and land mask generation.
3.1. Land Recognition
The differences in grayscale distribution between land and ocean regions in SAR images are significant. Generally, the grayscale values of land are relatively high, while those of the sea are relatively low. After filtering, the Otsu (OTSU) algorithm [37] is used to calculate the optimal threshold between the foreground and the background of the filtered image to obtain the binary image $I_b$. In this way, the positions and shapes of targets are retained, while unnecessary details and the background are removed. The OTSU algorithm finds the optimal threshold $T^*$ that maximizes the inter-class variance between the foreground and the background as follows:
$T^* = \arg\max_{0 \le T \le 255} \sigma_B^2(T)$ (2)
where $\sigma_B^2$ denotes the inter-class variance. The grayscale value of each pixel of the filtered image $I_f$ is compared with the optimal threshold $T^*$: pixels with grayscale values larger than the threshold are set to 255, while the others are set to 0. The binary image $I_b$ is obtained through the following equation:
$I_b(x, y) = \begin{cases} 255, & I_f(x, y) > T^* \\ 0, & \text{otherwise} \end{cases}$ (3)
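As a sketch, OpenCV's threshold function with the THRESH_OTSU flag computes the $T^*$ of Eq. (2) and applies the binarization of Eq. (3) in one call (continuing from the denoising sketch above):

```python
import cv2

# OTSU searches all thresholds for the one maximizing the inter-class
# variance; pixels above T* become 255, all others 0 (Eqs. (2)-(3)).
t_star, img_b = cv2.threshold(img_f, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```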
Next, according to the proportions of the two kinds of pixels in the histogram of the binary image $I_b$, the presence of large-scale land regions can be determined. Sea–land segmentation is conducted only if land regions exist, to reduce the interference of land with ship detection. Two examples are shown in Figure 3, including the original images, the binary images, and the histograms of a case containing land regions (first row) and a case without them (second row).
Let $p_{255}$ and $p_0$ represent the proportions of pixels with grayscale values of 255 and 0, respectively:

$p_{255} = \dfrac{N(I_b = 255)}{H \times W}, \quad p_0 = \dfrac{N(I_b = 0)}{H \times W}$ (4)
where $N(\cdot)$ denotes the number of pixels meeting the specified condition. The $p_{255}$ and $p_0$ of the first example in Figure 3, which contains a large area of land, are 0.56 and 0.44, respectively; those of the second example are 0.99 and 0.01. Thus, the sea and land regions can be distinguished based on the absolute difference $|p_{255} - p_0|$: when this difference is smaller than the threshold (0.90), a large land region is assumed and sea–land segmentation is needed, following

$\mathrm{Seg} = \begin{cases} 1, & |p_{255} - p_0| < 0.90 \\ 0, & \text{otherwise} \end{cases}$ (5)

Some land recognition results are shown in Figure 4.
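A small sketch of this decision rule, following Eqs. (4) and (5) (the function name is ours):

```python
import numpy as np

def needs_segmentation(img_b: np.ndarray, thresh: float = 0.90) -> bool:
    """Return True when a large land region is likely present (Eq. (5))."""
    p_255 = np.count_nonzero(img_b == 255) / img_b.size  # foreground share
    p_0 = 1.0 - p_255                                    # background share
    # Balanced proportions indicate land; a near-unanimous histogram
    # indicates open sea with only small ship targets.
    return abs(p_255 - p_0) < thresh
```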
3.2. Land Mask Generation
After confirming that the SAR image contains land, it is necessary to generate an accurate land mask to identify and exclude the land area during target detection. As shown in Figure 4b, most land pixels with high grayscale values are identified as foreground by the OTSU algorithm, but some pixels with low grayscale values are treated as background, which prevents the complete removal of land regions. To address this issue, this article utilizes morphological processing and connected domain labeling to generate a land mask in which the grayscale values of the land are all 255. Finally, the adaptive Sauvola algorithm [38] is applied to binarize local details to further improve the accuracy of the segmentation result.
Mathematical morphology (MM) draws on algebra, topology, and graph theory to describe the geometric structures and shapes of targets and is widely used in image processing and pattern recognition. As a nonlinear technique, morphological processing defines structural elements as templates that perform specific operations on binary images to describe and process shapes. It is suitable for structure quantification, contour detection, hole filling, image restoration and reconstruction, etc. Figure 5 shows a structural element $S$, which is a 2D point set with two grayscale levels of 0 and 255. Its shape, whether a rectangle, a cross, or an ellipse, influences the processing results differently.
In this article, a rectangular structuring element with a size of 5 × 5 is adopted, and the center of the rectangle is selected as the reference point. As shown in Figure 4b, the land regions in the binary images contain many foreground pixels with a grayscale value of 255 and some background pixels with a grayscale value of 0.
Then, the closing operation is used to fill the holes formed by background pixels inside the land regions of the binary image. This connects the gaps between edges and yields a relatively smooth contour map of the land, $I_c$:
$I_c = I_b \bullet S = (I_b \oplus S) \ominus S$ (6)
where "$\bullet$", "$\oplus$", and "$\ominus$" indicate the closing, dilation, and erosion operations, respectively. The closing operation is a combination of dilation and erosion, which fills holes and connects contour gaps while keeping the shapes of targets. Furthermore, the 8-adjacent traversal algorithm is utilized to search and mark the connected domains $C_i$, $i = 1, 2, \dots, n$, where $n$ denotes the number of connected regions. Pixels in a connected domain are assigned the same label. Thus, the land mask can be divided into many independent connected regions for subsequent processing.
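In OpenCV, the closing of Eq. (6) and the 8-connected labeling can be sketched as follows (continuing from the binarization sketch; the variable names are ours):

```python
import cv2

# 5 x 5 rectangular structuring element S with its center as reference.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

# Closing = dilation followed by erosion (Eq. (6)): fills holes inside
# land regions and connects contour gaps.
img_c = cv2.morphologyEx(img_b, cv2.MORPH_CLOSE, kernel)

# Label 8-connected components; `stats` holds per-component bounding
# boxes and pixel areas, used below to separate land from ships.
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(
    img_c, connectivity=8)
```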
Then, the average area $\bar{A} = \frac{1}{n}\sum_{i=1}^{n} A_i$ of all connected domains is calculated. Connected domains larger than the average area are considered to be land; those smaller are considered to be ships or ship-like islands, and their grayscale values are set to 0. As shown in Figure 6b, the grayscale values of the land regions in the resulting mask $M$ are all 255:

$M(x, y) = \begin{cases} 255, & (x, y) \in C_i \text{ and } A_i \ge \bar{A} \\ 0, & \text{otherwise} \end{cases}$ (7)
The land mask is subtracted from the filtered image to realize sea–land segmentation, as follows. The resulting image $I_s$ contains only sea regions and maritime targets, as shown in Figure 6c.

$I_s = I_f - M$ (8)
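A sketch of the mask generation and subtraction of Eqs. (7) and (8), continuing from the labeling sketch above:

```python
import cv2
import numpy as np

# Component 0 is the background; compare each foreground component's
# area against the mean area (Eq. (7)).
areas = stats[1:, cv2.CC_STAT_AREA]
mean_area = areas.mean()

mask = np.zeros_like(img_c)
for i, area in enumerate(areas, start=1):
    if area >= mean_area:
        mask[labels == i] = 255   # large component: treat as land

# Eq. (8): saturating subtraction blacks out the land regions while
# keeping sea regions and maritime targets.
img_s = cv2.subtract(img_f, mask)
```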
The land regions in the original image are filled with black pixels after segmentation. Using the global OTSU algorithm for binarization at this stage may cause both maritime targets and the ocean background to be classified as foreground. Therefore, this article applies the Sauvola algorithm, which is based on local features, to binarize the segmented image. The threshold is determined by the local average grayscale value and the standard deviation of the neighborhood of each pixel; after traversing all pixels, the accurate segmentation result is obtained. The threshold $T(x, y)$ is defined as follows:

$T(x, y) = m(x, y)\left[1 + k\left(\dfrac{s(x, y)}{R} - 1\right)\right]$ (9)

where $m(x, y)$ and $s(x, y)$ are the local average grayscale value and the standard deviation in the neighborhood, and $R$ is the dynamic range of the standard deviation, which depends on the quantization bits of the image. Because the SAR images in this article are 8-bit images, the value of $R$ is 128. The correction coefficient $k$, with a value from 0 to 1, has little effect on the results.
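A direct sketch of Eq. (9) using box filters for the local statistics (the window size and $k$ value are illustrative choices; $R = 128$ for 8-bit images):

```python
import cv2
import numpy as np

def sauvola_binarize(img: np.ndarray, w: int = 25,
                     k: float = 0.2, R: float = 128.0) -> np.ndarray:
    """Local binarization with the Sauvola threshold of Eq. (9)."""
    x = img.astype(np.float64)
    m = cv2.boxFilter(x, -1, (w, w))            # local mean m(x, y)
    m2 = cv2.boxFilter(x * x, -1, (w, w))       # local mean of squares
    s = np.sqrt(np.maximum(m2 - m * m, 0.0))    # local std s(x, y)
    T = m * (1.0 + k * (s / R - 1.0))           # Sauvola threshold
    return np.where(x > T, 255, 0).astype(np.uint8)
```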
4. SAR ISEM Based on Sea–Land Segmentation Preference
The complexity of the background and noise are the key factors that influence the accuracy and robustness of target detection in SAR images. This article proposes an SAR image saliency enhancement method based on the spectral residual (SR) [39]. The method distinguishes prior information from novel information following image coding theory: the prior information of the background is suppressed in the frequency domain, while the novel information of the targets is retained. With the resulting saliency map, the target areas are highlighted and the detection accuracy is improved.
For the segmented image $I_s$ without land areas, the 2D fast Fourier transform (2D FFT) is first applied to compute the frequency response $F(u, v)$. Then, the magnitude spectrum $A(u, v)$ and the phase spectrum $P(u, v)$ of the frequency response are obtained. A logarithmic transformation converts the magnitude spectrum into the logarithmic spectrum $L(u, v)$, which is regarded as carrying the prior information. This process can be depicted as follows:
$F(u, v) = \mathcal{F}[I_s(x, y)]$ (10)

$A(u, v) = \mathcal{A}(F(u, v)), \quad P(u, v) = \mathcal{P}(F(u, v)), \quad L(u, v) = \log(A(u, v))$ (11)
where $\mathcal{F}(\cdot)$ indicates the 2D FFT and $\mathcal{A}(\cdot)$, $\mathcal{P}(\cdot)$, and $\log(\cdot)$ are the operators used to compute the magnitude, phase, and logarithmic spectra, respectively. Then, a mean filter $h_n$ is designed and convolved with the logarithmic spectrum to obtain a filtered log spectrum containing the redundant information. The spectral residual $R(u, v)$, which retains the novel information, can be derived by
$R(u, v) = L(u, v) - h_n(u, v) * L(u, v)$ (12)

$h_n = \dfrac{1}{n^2}\begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{pmatrix}$ (13)
where $*$ and $h_n$ denote the convolution operation and an $n \times n$ matrix ($n$ = 5), respectively. Finally, the spectral residual and the phase spectrum are transformed back to the spatial domain through the 2D inverse fast Fourier transform (2D IFFT). A Gaussian filter $g(x, y)$ is also introduced to smooth the final saliency map $S_m(x, y)$:
$S_m(x, y) = g(x, y) * \left|\mathcal{F}^{-1}\left[\exp\left(R(u, v) + jP(u, v)\right)\right]\right|^2$ (14)

$g(x, y) = \dfrac{1}{2\pi\sigma^2}\exp\left(-\dfrac{x^2 + y^2}{2\sigma^2}\right)$ (15)
where $\mathcal{F}^{-1}(\cdot)$ denotes the 2D IFFT. The standard deviation $\sigma$ and the kernel size of the Gaussian filter are set to 2 and 5, respectively.
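The whole SR pipeline of Eqs. (10)–(15) can be sketched in a few NumPy/OpenCV lines (the small epsilon guarding against log(0) is our addition):

```python
import cv2
import numpy as np

def spectral_residual(img_s: np.ndarray) -> np.ndarray:
    """Saliency map via the spectral residual, Eqs. (10)-(15)."""
    f = np.fft.fft2(img_s.astype(np.float64))      # Eq. (10)
    amp, phase = np.abs(f), np.angle(f)            # Eq. (11)
    log_amp = np.log(amp + 1e-8)                   # log spectrum L(u, v)
    avg = cv2.blur(log_amp, (5, 5))                # h_n * L, Eq. (13)
    residual = log_amp - avg                       # Eq. (12)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return cv2.GaussianBlur(sal, (5, 5), 2)        # Eqs. (14)-(15)
```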
The saliency map is fused adaptively with the filtered image using different weights, and the fused image is normalized to the range from 0 to 255. The normalized composite image $I_n$ can be obtained by

$I_{fuse}(x, y) = \alpha I_f(x, y) + \beta S_m(x, y)$ (16)

$I_n(x, y) = \dfrac{I_{fuse}(x, y) - \min(I_{fuse})}{\max(I_{fuse}) - \min(I_{fuse})} \times 255$ (17)
where the weights $\alpha$ and $\beta$ are set to the optimal values of 0.5 and 2, respectively, and the operators $\max(\cdot)$ and $\min(\cdot)$ denote the maximal and minimum grayscale values of the image. To verify the effectiveness of the proposed ISEM based on sea–land segmentation preference, this article applied it to ship detection and adopted the YOLOX-Tiny model for pre-training. Because the model requires a 3-channel input, the channel of the normalized composite image is copied twice to obtain the enhanced image $I_e$:
$I_e = \mathrm{Cat}_3(I_n)$ (18)

where $\mathrm{Cat}_3(\cdot)$ indicates the operator used to expand the number of channels to 3.
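Continuing the sketches above, the fusion, normalization, and channel expansion of Eqs. (16)–(18) reduce to a few array operations (the epsilon in the denominator is our addition):

```python
import numpy as np

sal = spectral_residual(img_s)

# Eq. (16): weighted fusion with alpha = 0.5 and beta = 2.
fused = 0.5 * img_f.astype(np.float64) + 2.0 * sal

# Eq. (17): min-max normalization to [0, 255].
img_n = ((fused - fused.min()) /
         (fused.max() - fused.min() + 1e-8) * 255.0).astype(np.uint8)

# Eq. (18): replicate the channel for the 3-channel YOLOX-Tiny input.
img_e = np.stack([img_n, img_n, img_n], axis=-1)
```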
5. Light-Weight Model Deployment
The computational resources and bandwidth on spaceborne SAR platforms are limited, so deploying the light-weight model efficiently on edge computing platforms is crucial. To this end, the light-weight filtering, sea–land segmentation, and saliency enhancement methods were integrated with model pruning, quantization, a simplified Fourier transform, optimized Gaussian smoothing, fast connected domain labeling, and simplified normalization. The overall deployment solution is shown in Figure 7.
First, the TensorRT tool was used to convert the model into an efficient format to shorten the inference time. Then, the model was pruned to reduce the number of parameters and the computational complexity. Subsequently, the parameters were converted into 8-bit integers through quantization processing, which reduced the storage and computational complexity and improved the inference speed. To optimize performance, the NVIDIA Jetson Nano GPU and Field-Programmable Gate Array (FPGA) hardware acceleration were integrated. In addition, OpenMP and multi-threading technology of the multi-core CPU were exploited to achieve parallel processing, and the results were merged after the large image was segmented to ensure the consistency of processing speed and the effect.
For the coherent speckle noise of SAR images, a 5 × 5 median filter was adopted to balance denoising quality and efficiency, with multi-threaded parallel computing used to accelerate processing. At the same time, a simplified 5 × 5 mean filter was introduced into the SR algorithm to reduce the computational complexity while maintaining good denoising performance. These light-weight measures significantly improved the efficiency and real-time performance of filtering.
To improve the efficiency of sea–land segmentation on edge computing devices, a series of measures was adopted. Images were processed as single-channel grayscale data to reduce computational and storage requirements, large images were divided into sub-blocks, 5 × 5 structural elements were adopted for the morphological closing operation to reduce the computational burden, and the blocks were processed in parallel during connected domain analysis, as sketched below. Together, these strategies improved both the performance and the accuracy of segmentation.
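A hedged sketch of this block-parallel scheme (the tile size is an assumption, and `fn` stands for any of the per-block operations above):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def process_blocks(img: np.ndarray, fn, tile: int = 512) -> np.ndarray:
    """Apply `fn` to disjoint tiles of `img` in parallel and stitch."""
    h, w = img.shape[:2]
    out = np.empty_like(img)
    boxes = [(y, x) for y in range(0, h, tile) for x in range(0, w, tile)]

    def work(box):
        y, x = box
        out[y:y + tile, x:x + tile] = fn(img[y:y + tile, x:x + tile])

    # OpenCV kernels release the GIL, so threads give real parallelism;
    # the tiles are disjoint, so the writes do not conflict.
    with ThreadPoolExecutor() as ex:
        list(ex.map(work, boxes))
    return out
```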
To obtain a light-weight saliency enhancement method, an optimized 2D-FFT algorithm was designed to reduce the computational complexity, and parallel segmentation using multi-core processors was used to accelerate the Fourier transform. The size of the Gaussian filter was set to 5 × 5 to accelerate processing without decreasing accuracy. Normalization was also simplified through linear stretching or histogram equalization. An enhanced image with high contrast was generated through the weighted fusion between the filtered image and the saliency map.
This article used the YOLOX-Tiny [31] model for ship detection to reduce computational complexity. YOLOX is an anchor-free target detection model based on YOLOv3 [40], released by Megvii Technology in 2021. Compared with anchor-based models [30,41], it offers faster inference and more convenient deployment. The YOLOX series, including YOLOX-X, YOLOX-L, YOLOX-M, YOLOX-S, YOLOX-Tiny, and YOLOX-Nano, varies in complexity. To achieve real-time ship detection, this article selected the light-weight YOLOX-Tiny model, whose parameter count, floating-point operations, and model size are 5.03 M, 9.75 G, and 39.62 MB, respectively.
6. Experiments
6.1. Evaluation Metrics and Datasets
To verify the effectiveness of the proposed ISEM method, this article designed three kinds of experiments, including image denoising, image enhancement based on sea–land segmentation, and light-weight ship detection. To evaluate the denoising performance fairly, three indicators were used.
(1) The peak signal-to-noise ratio (PSNR) measures the similarity between the original image and the filtered image. The unit of the PSNR is the decibel (dB); the higher the value, the lower the distortion of the filtered image. The PSNR is defined as follows:

$\mathrm{PSNR} = 10 \log_{10}\left(\dfrac{I_{\max}^2}{\mathrm{MSE}}\right)$ (19)

$\mathrm{MSE} = \dfrac{1}{H \times W}\sum_{x=1}^{H}\sum_{y=1}^{W}\left[I(x, y) - I_f(x, y)\right]^2$ (20)
where $I_{\max}$ is the maximal grayscale value of the original image; this value is 255 because 8-bit images are used in this work. MSE is the mean square error, and $H$ and $W$ are the height and width of the original image, respectively.
(2) The structural similarity index measure (SSIM) [42] is also used to measure the similarity between two images. Compared with the PSNR, the SSIM takes into account image brightness, contrast, and structure, which is closer to the perceptual characteristics of the human visual system. The value range of the SSIM is [−1, 1]; the higher the value, the more similar the two images. The SSIM is defined as follows:
$\mathrm{SSIM}(I, I_f) = \dfrac{(2\mu_I \mu_{I_f} + C_1)(2\sigma_{I I_f} + C_2)}{(\mu_I^2 + \mu_{I_f}^2 + C_1)(\sigma_I^2 + \sigma_{I_f}^2 + C_2)}$ (21)
where $\mu_I$ and $\mu_{I_f}$ are the global average grayscale values of the original image and the filtered image, respectively, $\sigma_{I I_f}$ is the grayscale covariance of the two images, and $\sigma_I^2$ and $\sigma_{I_f}^2$ are the grayscale variances of the original and filtered images, respectively. To prevent the denominator of the SSIM from being 0, $C_1$ and $C_2$ are constants with values of 6.5025 and 58.5225, respectively.
(3) The edge preservation index (EPI) [43] evaluates the edge preservation ability of the filtered image in the horizontal and vertical directions. The value range of the EPI is [0, 1]; the higher the value, the richer the edge information preserved in the filtered image. The EPI is defined as follows:

$\mathrm{EPI} = \dfrac{\sum_{x, y}\left(|I_f(x, y) - I_f(x+1, y)| + |I_f(x, y) - I_f(x, y+1)|\right)}{\sum_{x, y}\left(|I(x, y) - I(x+1, y)| + |I(x, y) - I(x, y+1)|\right)}$ (22)

where $I(x+1, y)$ and $I_f(x+1, y)$ are the right adjacent pixels of the current pixel in the original image and the filtered image, respectively, and $I(x, y+1)$ and $I_f(x, y+1)$ are the corresponding bottom adjacent pixels.
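For reference, the PSNR of Eqs. (19)–(20) and the EPI of Eq. (22) can be sketched directly in NumPy (the SSIM of Eq. (21) is usually taken from an existing windowed implementation and is omitted here):

```python
import numpy as np

def psnr(orig: np.ndarray, filt: np.ndarray, peak: float = 255.0) -> float:
    """Eqs. (19)-(20): peak signal-to-noise ratio in dB."""
    mse = np.mean((orig.astype(np.float64) - filt.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def epi(orig: np.ndarray, filt: np.ndarray) -> float:
    """Eq. (22): ratio of gradient sums, filtered over original."""
    o = orig.astype(np.float64)
    f = filt.astype(np.float64)
    num = np.abs(np.diff(f, axis=1)).sum() + np.abs(np.diff(f, axis=0)).sum()
    den = np.abs(np.diff(o, axis=1)).sum() + np.abs(np.diff(o, axis=0)).sum()
    return num / den
```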
To verify the effectiveness of the proposed method for ship detection, the enhanced YOLOX-Tiny algorithm was compared with CNN-based anchor models, such as Faster R-CNN [44], RetinaNet [45], and SSD 512 [46], and with anchor-free models, such as YOLOX-Tiny [31], CenterNet [47], and FCOS [48]. The evaluation metrics include precision (P), the accuracy of positive-class prediction; recall (R), the completeness of positive-class detection; the F1 score, the harmonic mean of P and R; average precision (AP), the area under the precision–recall curve; and frames per second (FPS), which measures the inference speed. These indicators are defined as follows.
$P = \dfrac{TP}{TP + FP}$ (23)

$R = \dfrac{TP}{TP + FN}$ (24)

$F1 = \dfrac{2 \times P \times R}{P + R}$ (25)

$AP = \int_0^1 P(R)\, dR$ (26)

$\mathrm{FPS} = \dfrac{1}{N}$ (27)
where TP, FP, and FN are the numbers of true positive, false positive, and false negative samples, respectively, $P(R)$ is the precision–recall curve, and $N$ is the average inference time of the model for each SAR image. Existing datasets for SAR ship detection include SSDD [49], LS-SSDD-v1.0 [50], and the HRSID (high-resolution SAR images) dataset [51]. These datasets address a wide range of requirements, spanning low- to high-resolution imagery, small- to large-scale scenarios, and static detection to dynamic behavior analysis, and they provide robust support for algorithm training, performance validation, and real-world applications. In this work, the publicly available SSDD and LS-SSDD-v1.0 datasets are used to verify the performance of the ISEM and of ship detection. They contain high-resolution SAR images covering near-shore, offshore, and complex backgrounds. Figure 8 shows some ship examples and the corresponding bounding boxes.
The SSDD dataset contains 1160 SAR images with 2456 multi-scale ship targets. The data come from radar satellites such as Sentinel-1, TerraSAR-X, and RadarSat-2, with HH, VV, VH, and HV polarization modes. The spatial resolution ranges from 1 to 15 m, and the image sizes are (186–524) × (214–668) × 3. The LS-SSDD-v1.0 dataset contains 15 large-size SAR images with 6015 small ship targets, acquired by the Sentinel-1 satellite with VV and VH polarization modes at a spatial resolution of 5 m or 20 m. To facilitate model training, the 15 large-size SAR images were cropped into 9000 sub-images of size 800 × 800. The distributions of ship bounding boxes in the two datasets are shown in Figure 9: the SSDD dataset contains many multi-scale ships, while LS-SSDD-v1.0 contains a large number of small ships.
Experiments were conducted on the NVIDIA Jetson Nano, an edge computing platform with a low-power GPU suitable for image processing tasks in edge environments. Python 3.8, OpenCV, and NumPy were used for programming. The model was trained and run for inference in the TensorFlow 2.x framework, and TensorRT was utilized for optimization and acceleration.
6.2. Results of SAR Image Denoising
To evaluate the performance of different filtering methods for SAR image denoising, this work selected mean filtering [32], median filtering [33], bilateral filtering [34], Lee filtering [35], and Frost filtering [36]. A fixed filter size of 5 × 5 was used uniformly. The filtering results were evaluated based on the PSNR, the SSIM, the EPI, and the running time. The results of the different filtering methods are shown in Figure 10 and Table 1.
From Figure 10, it can be seen that the mean filter performed the worst, blurring information and losing the ships' edges. The median filter filled small holes with low grayscale values through its sort-and-take-the-median strategy. The bilateral filter and the Lee filter preserved edges well. The Frost filter also caused blurring.
Table 1 shows the filtering results and evaluation indicators of each denoising method on the two images. The mean filter has the lowest indices, the median filter is in the middle, and the bilateral and Lee filters are the best. Regarding the running time, the Frost filter takes the longest, while the mean filter is the fastest, followed by the median filter, which is faster than all other methods except the mean filter. Considering both the real-time and the filtering requirements, this work chose the median filter to effectively reduce the coherent speckle noise.
To explore the impact of filter size on denoising, edge preservation, and running time, this article tested three sizes: 3 × 3, 5 × 5, and 7 × 7. Two single-channel SAR images were used for median filtering. Figure 11 and Table 2 show the filtering results together with the PSNR, the SSIM, the EPI, and the running time. Bold text indicates the optimal result for each image across the window sizes.
As can be seen from Table 2, although the evaluation metrics of the median filter are highest when the window size is 3 × 3, Figure 11 shows that its noise suppression is insufficient. Conversely, with a 7 × 7 window, the image is blurred and edge information is lost. Therefore, the 5 × 5 filter was selected for the ISEM method.
6.3. Results of Image Saliency Enhancement
The effectiveness of the land recognition method based on the binary image histogram was verified on the two datasets. The results show that the proposed method recognized land accurately and robustly in more than 95% of the scenarios. By comparing the proportions of pixels with grayscale values of 255 and 0 in the binary image against the threshold, large land regions can be roughly recognized, as shown in Figure 12.
After confirming the existence of land, an accurate land mask is generated through land–sea segmentation to avoid land interference with ship detection. As shown in Figure 13, the land mask can accurately distinguish between land and sea and reduce the negative impacts of a complex background on ship detection, which significantly improves the detection accuracy and reliability.
The saliency enhancement results based on SR [39] are shown in Figure 14. The original images (Figure 14a) are affected by noise, sea clutter, and nearshore land, which are prone to causing false and missed detections. The 5 × 5 median filter (Figure 14b) effectively weakens the coherent speckle noise at low cost. The saliency maps (Figure 14c) remove the land, highlight the ships, and suppress the background. The filtered images were combined with the saliency maps using weights of 0.5 and 2 to generate the enhanced images (Figure 14d), which highlight ships, suppress noise and sea clutter, and improve contrast. Experiments show that saliency enhancement improves the accuracy and robustness of ship detection.
To intuitively demonstrate the enhancement effect, 3D renderings of the original and enhanced images were compared (Figure 15), where the X and Y axes represent the width and height and the Z axis represents the grayscale value. In the original images (Figure 15b), the grayscale values of small ships are low and easily affected by sea clutter, which caused false and missed detections near large land regions with high grayscale values. After processing with the ISEM method (Figure 15d), the grayscale values of ships are significantly raised, the sea clutter is suppressed, and the land areas are completely removed. Therefore, the accuracy and robustness of ship detection in the enhanced images are significantly improved.
6.4. Light-Weight Ship Detection
This section compares the ISEM-based YOLOX-Tiny model with other CNN-based object detection models, including anchor-based models (Faster R-CNN [44], RetinaNet [45], and SSD 512 [46]) and anchor-free models (YOLOX-Tiny [31], CenterNet [47], and FCOS [48]), on the SSDD and LS-SSDD-v1.0 datasets, as shown in Table 3 and Table 4. During training, the datasets were randomly divided into training, validation, and test sets at a ratio of 7:2:1, and stochastic gradient descent was used as the optimizer. The initial learning rate was 0.001, the batch size was 8, the weight decay and momentum were 0.0005 and 0.9, respectively, and the number of training epochs was 300. All experiments were run on a Windows 10 computer with an Intel Core i5-12400F CPU and an NVIDIA GeForce GTX 1660 SUPER GPU. A comparison of model complexity, including the number of parameters, floating-point operations (FLOPs), and model size, is shown in Figure 16. The P, R, F1, AP, and FPS of the ISEM+YOLOX-Tiny model are 98.25%, 97.29%, 98.38%, 97.96%, and 35.31 on SSDD and 83.25%, 80.77%, 82.89%, 81.38%, and 25.63 on LS-SSDD-v1.0, respectively. The ISEM significantly improves the detection accuracy of the YOLOX-Tiny model, which outperforms the other models while remaining efficient on edge computing platforms, and the ISEM+YOLOX-Tiny model has the lowest complexity among the compared models.
Figure 17 and Figure 18 show the performance of each ship detection model on the SSDD and LS-SSDD-v1.0 datasets. On the SSDD dataset (Figure 17), the Faster R-CNN, SSD 512, and CenterNet models produce many false detections, and the RetinaNet model misses some ships. On the LS-SSDD-v1.0 dataset (Figure 18), due to the high proportion of small ships and the interference of sea clutter and noise, the Faster R-CNN, SSD 512, and YOLOX-Tiny models all miss some ships, and the sea clutter, coherent speckle noise, and land areas also lead to false detections. In contrast, the ISEM+YOLOX-Tiny model effectively reduces false and missed detections through the ISEM and achieves high accuracy on both datasets.
To evaluate the impact of the light-weight architecture on the ISEM, two experiments were conducted on the NVIDIA Jetson Nano platform: one with the original model and one with an optimized model incorporating quantization, a simplified Fourier transform, optimized Gaussian smoothing, fast connected domain labeling, and simplified normalization. The results show that the optimized model significantly reduced the inference time and power consumption while improving the inference speed and the capability of processing high-resolution images. The specific data are shown in Table 5.
7. Conclusions
This article proposes a light-weight ISEM based on sea–land segmentation preference. It aims to enhance the visual saliency of maritime targets and suppress redundant backgrounds, thereby improving the accuracy and speed of SAR image target detection. The proposed method combines denoising, adaptive land segmentation, accurate mask generation, and SR-based saliency enhancement, and it is applied to ship detection. Experiments show that detection performances in complex backgrounds and changeable meteorological conditions are significantly improved. The main contributions of the ISEM are as follows:
(1). The limitations of traditional and CNN-based methods are addressed: the low accuracy and poor generalization of traditional methods and the high complexity of CNN methods. Light-weight design, parallel computing, hardware acceleration, efficient inference, and low power consumption together achieve high efficiency in edge computing environments.
(2). The detection accuracy and robustness are improved. Image preprocessing and saliency enhancement are utilized to suppress the noise and sea clutter effectively. It is key to improving the detection accuracy of the model on the SSDD and LS-SSDD-v1.0 datasets with complex backgrounds.
(3). The computational resource consumption is reduced. The light-weight design increases the speed by about three times and reduces the model size and inference time by 70% and 50%, respectively.
(4). Data transmission consumption is reduced. In spaceborne SAR deployment, only the image slices containing the detected ships are transmitted instead of the entire image, which significantly reduces the transmission bandwidth and improves the efficiency and cost-effectiveness of satellite operation.
Conceptualization, H.Y. and K.Y.; methodology, K.Y.; software, C.L.; validation, H.Y., K.Y. and L.W.; formal analysis, L.W.; investigation, T.L.; resources, T.L.; data curation, C.L.; writing—original draft preparation, K.Y.; writing—review and editing, H.Y. and L.W.; visualization, L.W.; supervision, T.L.; project administration, L.W.; funding acquisition, L.W. All authors have read and agreed to the published version of the manuscript.
The raw data supporting the conclusions of this article will be made available by the authors on request.
Author Teng Li was employed by the company Hainan Weixing Remote Sensing Technology Application Service Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Figure 18. Comparison of experimental results based on the LS-SSDD-v1.0 [50] dataset.
Comparison of four metrics using different filter methods.
Filter | Image Size | PSNR (dB) | SSIM | EPI | Time (s)
---|---|---|---|---|---
Mean filter [32] | 512 × 512 | 30.445 | 0.731 | 0.443 | 0.0037
Mean filter [32] | 800 × 800 | 21.469 | 0.571 | 0.340 | 0.0073
Median filter [33] | 512 × 512 | 31.535 | 0.792 | 0.487 | 0.0078
Median filter [33] | 800 × 800 | 22.499 | 0.637 | 0.372 | 0.0110
Bilateral filter [34] | 512 × 512 | 32.288 | 0.843 | 0.495 | 0.0846
Bilateral filter [34] | 800 × 800 | 24.138 | 0.755 | 0.413 | 0.1357
Lee filter [35] | 512 × 512 | 32.931 | 0.825 | 0.512 | 5.5143
Lee filter [35] | 800 × 800 | 22.640 | 0.667 | 0.389 | 13.0928
Frost filter [36] | 512 × 512 | 31.563 | 0.819 | 0.483 | 9.1715
Frost filter [36] | 800 × 800 | 22.329 | 0.648 | 0.355 | 20.4392
Comparison of four metrics using different filter sizes.
Filter Size | Image Size | PSNR (dB) | SSIM | EPI | Time (s)
---|---|---|---|---|---
3 × 3 | 512 × 512 | 33.318 | 0.881 | 0.626 | 0.0024
3 × 3 | 800 × 800 | 23.413 | 0.773 | 0.521 | 0.0058
5 × 5 | 512 × 512 | 31.535 | 0.792 | 0.487 | 0.0078
5 × 5 | 800 × 800 | 22.499 | 0.637 | 0.372 | 0.0110
7 × 7 | 512 × 512 | 28.140 | 0.615 | 0.296 | 0.0144
7 × 7 | 800 × 800 | 19.812 | 0.336 | 0.241 | 0.0320
Comparison of model performance based on the SSDD [49] dataset.
Anchor Box | Detection Model | P (%) | R (%) | F1 (%) | AP (%) | FPS
---|---|---|---|---|---|---
With Anchor Boxes | Faster R-CNN [44] | 85.11 | 93.13 | 87.27 | 90.77 | 8.65
With Anchor Boxes | RetinaNet [45] | 97.05 | 94.81 | 95.92 | 94.85 | 11.53
With Anchor Boxes | SSD 512 [46] | 92.30 | 94.53 | 93.40 | 95.28 | 12.28
Without Anchor Boxes | FCOS [48] | 94.38 | 94.90 | 94.63 | 93.95 | 13.70
Without Anchor Boxes | CenterNet [47] | 95.59 | 92.53 | 94.04 | 94.32 | 14.12
Without Anchor Boxes | YOLOX-Tiny [31] | 96.51 | 95.67 | 96.09 | 96.44 | 30.31
Without Anchor Boxes | ISEM+YOLOX-Tiny | 98.25 | 97.29 | 98.38 | 97.96 | 35.31
Comparison of model performance based on the LS-SSDD-v1.0 [50] dataset.
Anchor Box | Detection Model | P (%) | R (%) | F1 (%) | AP (%) | FPS
---|---|---|---|---|---|---
With Anchor Boxes | Faster R-CNN [44] | 73.81 | 72.46 | 73.13 | 72.95 | 5.54
With Anchor Boxes | RetinaNet [45] | 81.77 | 73.26 | 77.28 | 76.15 | 7.38
With Anchor Boxes | SSD 512 [46] | 76.85 | 77.59 | 77.22 | 78.63 | 7.86
Without Anchor Boxes | FCOS [48] | 79.52 | 77.85 | 78.68 | 77.14 | 8.77
Without Anchor Boxes | CenterNet [47] | 80.54 | 75.93 | 78.17 | 77.90 | 9.04
Without Anchor Boxes | YOLOX-Tiny [31] | 81.32 | 78.51 | 79.89 | 79.21 | 19.40
Without Anchor Boxes | ISEM+YOLOX-Tiny | 83.25 | 80.77 | 82.89 | 81.38 | 25.63
Comparison of indicators between the light-weight architecture and the original model.
Metrics | Original Model | Light-Weight Architecture | Improvement
---|---|---|---
Inference time | About 375 ms/frame | About 150 ms/frame | Reduced by about 60%
Average inference speed | About 2.67 frames/s | About 6.67 frames/s | Increased by about 150%
High-resolution image processing | Long processing time, insufficient for real-time use | Processing time greatly reduced, real-time capable | Significantly improved
Power | About 10 W | About 7 W | Reduced by about 30%
References
1. Bao, Z.; Xing, M.D.; Wang, T. Radar Imaging Technology; Electronics Industry Press: Beijing, China, 2005.
2. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag.; 2013; 1, pp. 6-43. [DOI: https://dx.doi.org/10.1109/MGRS.2013.2248301]
3. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geosci. Remote Sens. Mag.; 2017; 5, pp. 8-36. [DOI: https://dx.doi.org/10.1109/MGRS.2017.2762307]
4. Gao, G. Research on Automatic Acquisition Technology of SAR Image Target ROI. Ph.D. Thesis; National University of Defense Science and Technology: Changsha, China, 2007.
5. Ding, B.; Wen, G.; Ma, C.; Yang, X. An efficient and robust framework for SAR target recognition by hierarchically fusing global and local features. IEEE Trans. Image Process.; 2018; 27, pp. 5983-5995. [DOI: https://dx.doi.org/10.1109/TIP.2018.2863046] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30080149]
6. Feng, S.; Ji, K.; Wang, F.; Zhang, L.; Ma, X.; Kuang, G. PAN: Part attention network integrating electromagnetic characteristics for interpretable SAR vehicle target recognition. IEEE Trans. Geosci. Remote Sens.; 2023; 61, 5204617. [DOI: https://dx.doi.org/10.1109/TGRS.2023.3256399]
7. Yang, D. Research on Sparse Imaging and Moving Target Detection Processing Method for Satellite. Ph.D. Thesis; Xidian University: Xi’an, China, 2015.
8. Lee, J.S.; Jurkevich, I. Speckle noise reduction in synthetic aperture radar images. IEEE Trans. Geosci. Remote Sens.; 1990; 28, pp. 38-46.
9. Liu, Y.; Zhang, L.; Wei, H. A novel ship detection method in SAR images based on saliency detection and deep learning. Remote Sens.; 2020; 12, 407.
10. Xie, Y.; Wang, J.; Yu, T. Ship detection in SAR images based on saliency detection and extreme learning machine. IEEE Access; 2019; 7, pp. 35608-35615.
11. Borji, A.; Itti, L. State-of-the-art in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell.; 2013; 35, pp. 185-207. [DOI: https://dx.doi.org/10.1109/TPAMI.2012.89]
12. Liu, N.; Zhang, L.; Tang, X. Saliency detection-oriented image enhancement. IEEE Trans. Image Process.; 2017; 26, pp. 2658-2672.
13. Achanta, R.; Hemami, S.; Estrada, F.; Süsstrunk, S. Frequency-tuned salient region detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Miami, FL, USA, 20–25 June 2009.
14. Gu, X.; Zhang, L. A novel saliency detection method based on multi-scale deep features. IEEE Trans. Multimed.; 2018; 20, pp. 2081-2093.
15. Li, H.; Lu, H.; Zhang, L. Saliency detection via graph-based manifold ranking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Portland, OR, USA, 23–28 June 2013.
16. Shi, C.; Zhang, X.; Sun, J.; Wang, L. Remote Sensing Scene Image Classification Based on Dense Fusion of Multi-level Features. Remote Sens.; 2021; 13, 4379. [DOI: https://dx.doi.org/10.3390/rs13214379]
17. Guo, R.; Sim, C.H. Retinex theory for image enhancement via recursive filtering. IEEE Trans. Image Process.; 2022; 31, pp. 4107-4120.
18. Chen, Y.; Fang, X.; Ma, K. Local contrast enhancement based on adaptive local ternary pattern. IEEE Trans. Image Process.; 2023; 32, pp. 1-14.
19. Heijmans, H.J.; Roerdink, J.B. Mathematical morphology: A modern approach to image processing based on algebra and geometry. J. Vis. Commun. Image Represent.; 1998; 8, pp. 348-349. [DOI: https://dx.doi.org/10.1137/1037001]
20. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature; 2015; 521, pp. 436-444. [DOI: https://dx.doi.org/10.1038/nature14539] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26017442]
21. Liu, L.; Ouyang, W.; Wang, X.; Fieguth, P.; Chen, J.; Liu, X.; Pietikäinen, M. Deep learning for generic object detection: A survey. Int. J. Comput. Vis.; 2020; 128, pp. 261-318. [DOI: https://dx.doi.org/10.1007/s11263-019-01247-4]
22. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv; 2014; arXiv: 1409.1556
23. Wang, T.; Chen, W.; Luo, X. Deep retinex decomposition with dense inception residual network for low-light image enhancement. IEEE Trans. Instrum. Meas.; 2022; 71, pp. 1-12.
24. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst.; 2014; 27, pp. 2672-2680.
25. Liu, F.; Shen, J. GAN-based image super-resolution with salient region detection and attention mechanism. IEEE Trans. Circuits Syst. Video Technol.; 2022; 32, pp. 5818-5830.
26. He, Y.; Yao, L.P.; Li, G.; Liu, Y.; Yang, D.; Li, W. On-orbit fusion processing and analysis of multi-source satellite information. J. Astronaut.; 2021; 42, pp. 1-10.
27. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge computing: Vision and challenges. IEEE Internet Things J.; 2016; 3, pp. 637-646. [DOI: https://dx.doi.org/10.1109/JIOT.2016.2579198]
28. Bian, C.J. Research on In-Orbit Real-Time Detection and Compression Technology of Effective Area for Optical Remote Sensing Images. Ph.D. Thesis; Harbin Institute of Technology: Harbin, China, 2019.
29. Wang, Z.K.; Fang, Q.Y.; Han, D.P. Research progress of in-orbit intelligent processing technology for imaging satellites. J. Astronaut.; 2022; 43, pp. 259-270.
30. Redmon, J.; Divvala, S.; Girshick, R.B.; Farhadi, A. You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016; Las Vegas, NV, USA, 27–30 June 2016; pp. 779-788. [DOI: https://dx.doi.org/10.1109/CVPR.2016.91]
31. Ge, Z.; Liu, S.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv; 2021; arXiv: 2107.08430
32. Pitas, I.; Venetsanopoulos, A. Nonlinear mean filters in image processing. IEEE Trans. Acoust. Speech Signal Process.; 1986; 34, pp. 573-584. [DOI: https://dx.doi.org/10.1109/TASSP.1986.1164857]
33. Huang, T.; Yang, G.; Tang, G. A fast two-dimensional median filtering algorithm. IEEE Trans. Acoust. Speech Signal Process.; 1979; 27, pp. 13-18. [DOI: https://dx.doi.org/10.1109/TASSP.1979.1163188]
34. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. Proceedings of the IEEE International Conference on Computer Vision; Bombay, India, 4–7 January 1998; pp. 839-846.
35. Lee, J.-S. Digital image enhancement and noise filtering by use of local statistics. IEEE Trans. Pattern Anal. Mach. Intell.; 1980; PAMI-2, pp. 165-168. [DOI: https://dx.doi.org/10.1109/TPAMI.1980.4766994] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21868887]
36. Frost, V.S.; Stiles, J.A.; Shanmugan, K.S.; Holtzman, J.C. A model for radar images and its application to adaptive digital filtering of multiplicative noise. IEEE Trans. Pattern Anal. Mach. Intell.; 1982; PAMI-4, pp. 157-166. [DOI: https://dx.doi.org/10.1109/TPAMI.1982.4767223] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21869022]
37. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern.; 1979; 9, pp. 62-66. [DOI: https://dx.doi.org/10.1109/TSMC.1979.4310076]
38. Sauvola, J.; Pietikainen, M. Adaptive document image binarization. Pattern Recognit.; 2000; 33, pp. 225-236. [DOI: https://dx.doi.org/10.1016/S0031-3203(99)00055-2]
39. Hou, X.; Zhang, L. Saliency detection: A spectral residual approach. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Minneapolis, MN, USA, 17–22 June 2007; pp. 1-8.
40. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv; 2018; arXiv: 1804.02767
41. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA, 21–26 July 2017; pp. 6517-6525.
42. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process.; 2004; 13, pp. 600-612. [DOI: https://dx.doi.org/10.1109/TIP.2003.819861]
43. Sattar, F.; Floreby, L.; Salomonsson, G.; Lovstrom, B. Image enhancement based on a nonlinear multiscale method. IEEE Trans. Image Process.; 1997; 6, pp. 888-895. [DOI: https://dx.doi.org/10.1109/83.585239] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18282982]
44. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell.; 2017; 39, pp. 1137-1149. [DOI: https://dx.doi.org/10.1109/TPAMI.2016.2577031] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27295650]
45. Lin, T.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal loss for dense object detection. IEEE Trans. Pattern Anal. Mach. Intell.; 2020; 42, pp. 318-327. [DOI: https://dx.doi.org/10.1109/TPAMI.2018.2858826] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30040631]
46. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 21-37.
47. Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; Tian, Q. CenterNet: Keypoint triplets for object detection. Proceedings of the IEEE/CVF International Conference on Computer Vision 2019; Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6568-6577.
48. Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: Fully convolutional one-stage object detection. Proceedings of the IEEE/CVF International Conference on Computer Vision 2019; Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9626-9635.
49. Zhang, T.; Zhang, X.; Li, J.; Xu, X.; Wang, B.; Zhan, X.; Xu, Y.; Ke, X.; Zeng, T.; Su, H. et al. SAR ship detection dataset (SSDD): Official release and comprehensive data analysis. Remote Sens.; 2021; 13, 3690. [DOI: https://dx.doi.org/10.3390/rs13183690]
50. Zhang, T.; Zhang, X.; Ke, X.; Zhan, X.; Shi, J.; Wei, S.; Pan, D.; Li, J.; Su, H.; Zhou, Y. et al. LS-SSDD-v1.0: A deep learning dataset dedicated to small ship detection from large-scale Sentinel-1 SAR images. Remote Sens.; 2020; 12, 2997. [DOI: https://dx.doi.org/10.3390/rs12182997]
51. Zhang, T.; Zhang, X.; Ke, X.; Zhan, X.; Shi, J.; Wei, S.; Pan, D.; Li, J.; Su, H.; Zhou, Y. et al. HRSID: A High-Resolution SAR Images Dataset for Ship Detection and Instance Segmentation. IEEE Access; 2020; 8, pp. 120234-120254.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
With the advantages of wide coverage, constant observation ability, and an active imaging mechanism, synthetic aperture radar (SAR) has become a preferable choice for ship detection in complicated scenarios. However, existing algorithms, especially those based on convolutional neural networks (CNNs), cannot achieve satisfactory accuracy and generalization ability, and their complex architectures limit real-time performance on embedded or edge computing platforms. To handle these issues, this article proposes a light-weight image saliency enhancement method (ISEM) based on sea–land segmentation preference for ship detection. First, the interfering land regions are recognized adaptively based on the binary histogram of the denoised image. To distinguish ships from redundant backgrounds, a spectral residual method is then introduced to generate the saliency map in the frequency domain. The saliency map and the denoised image are fused to further improve the final result. Finally, by integrating parallel computing and hardware acceleration, the proposed method can be deployed on edge computing platforms with limited resources. Experimental results reveal that the proposed method, with fewer parameters, reaches higher detection accuracy and runs three times faster compared with CNNs.
1 School of Aerospace Science and Technology, Xidian University, Xi’an 710126, China;
2 Key Laboratory of Earth Observation of Hainan Province, Hainan Aerospace Information Research Institute, Sanya 572029, China; Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
3 Hainan Weixing Remote Sensing Technology Application Service Co., Ltd., Sanya 572022, China;