Abstract: Nowadays, with the rapid development of quantitative remote sensing represented by high-resolution UAV hyperspectral remote sensing observation technology, higher requirements have been put forward for the rapid preprocessing and geometric correction accuracy of hyperspectral images. The optimal geometric correction model and parameter combination for UAV hyperspectral images need to be determined to reduce unnecessary time spent in preprocessing and to provide high-precision data support for the application of UAV hyperspectral images. In this study, the geometric correction accuracy was analyzed under various geometric correction models (including the affine transformation model, local triangulation model, polynomial model, direct linear transformation model, and rational function model) and resampling methods (including the nearest neighbor, bilinear interpolation, and cubic convolution resampling methods). Furthermore, the influence of the distribution, number, and accuracy of precise ground control points (GCPs) was analyzed based on the control variable method. The results showed that the average geometric positioning error of the UAV hyperspectral image (at 80 m altitude AGL) without geometric correction was as high as 3.4041 m (about 65 pixels). The optimal geometric correction scheme for the UAV hyperspectral image (at 80 m altitude AGL) used the local triangulation model, adopted the bilinear interpolation resampling method, and selected 12 edge-middle distributed GCPs. The correction accuracy could reach 0.0493 m (less than one pixel). This study provides a reference for the geometric correction of UAV hyperspectral images.
Keywords: geometric correction, hyperspectral images, unmanned aerial vehicle (UAV), ground control point (GCP)
1 Introduction
Unmanned aerial vehicle (UAV) hyperspectral remote sensing has become one of the hotspots in the development of quantitative remote sensing[1,2]. However, the UAV hyperspectral imager is affected by various factors, such as the sensor and the imaging environment, which degrade the positioning accuracy of the mosaic hyperspectral images in the geographic projection and result in a deviation between the hyperspectral images and their actual geographic locations[3]. In addition, with the rapid development of quantitative remote sensing and high-resolution remote sensing observation technology[4], higher requirements have been put forward for the positioning accuracy of remote sensing images, in the hope that the pixels of high-resolution remote sensing images can accurately locate ground features. Therefore, it is necessary to achieve accurate geocoding of the hyperspectral images using precision geometric correction[6], which supports subsequent geometric measurement, mutual comparison, and application research.
However, at present, studies on geometric correction methods mainly focus on satellite remote-sensing images. Firstly, some scholars used a geometric correction model to explore the feasibility of geometric correction of satellite remote sensing images. For example, using the rational function model, Wang et al.[7] and Ma et al.[8] performed geometric correction on GF-3 and GF-2 satellite images, respectively. Secondly, some scholars studied the geometric correction accuracy of satellite remote sensing images from the perspective of the geometric correction model and control point selection. For example, in terms of geometric correction models, Zhong[9] discussed various general geometric correction models and their solution methods for high-resolution QuickBird satellite images. Wang et al.[10] and Wang et al.[11] explored the effects of different geometric correction methods on the positioning accuracy of GF-2 and GF-6 images, respectively. Furthermore, Eltohamy et al.[12], Wang et al.[13], and Babiker et al.[14] tested the influence of the distribution of ground control points (GCPs) on the geometric correction accuracy of remote sensing images. Some scholars also discussed the geometric correction effect of the geometric correction model and control points on different satellite images. For example, Guo[15] and Jiang et al.[16] used polynomial models, local triangulation models, rational function models, and control points of different numbers to determine a reasonable geometric correction method for the panchromatic image of the Jilin-1 Optical A satellite.
Overall, in satellite remote sensing engineering applications, only a geometric correction model was used to achieve the purpose of geometric correction. However, no basis for selecting this correction model was established, and the accuracy of different correction models was not tested. The research on the factors influencing geometric correction accuracy is not comprehensive. In addition, there are significant differences between satellite remote sensing and UAV hyperspectral remote sensing in terms of flight platform, flight height, image area, etc.[17-20]. Therefore, it is unwise to apply satellite remote sensing image geometric correction methods directly to UAV hyperspectral remote sensing images. Current UAV hyperspectral images lack comprehensive geometric correction model testing and quantitative evaluation of geometric correction parameters and geometric correction accuracy.
Therefore, this study aimed to further test the availability of geometric correction models on UAV hyperspectral images and to comprehensively analyze the influencing factors for the geometric correction accuracy of UAV hyperspectral images. The geometric correction model, resampling method, the number, distribution, and accuracy of GCPs, UAV flight altitude, and efficiency of geometric correction were taken as variables according to the main steps of geometric correction[21] and the characteristics of UAV hyperspectral image acquisition[22] . The geometric correction results of the control variables were compared. The comparison combined with the accuracy and applicability requirements of the UAV hyperspectral image application to determine the optimal geometric correction model and parameter combination of UAV hyperspectral images. Thus, unnecessary waste of time in preprocessing can be reduced, and high-precision data support can be provided for UAV hyperspectral remote sensing image applications.
2 Materials and methods
2.1 Device description
In this study, the UAV is a DJI Matrice 600 Pro hexacopter (Shenzhen, China), whose hovering accuracy is better than 0.5 m in the vertical direction and 1.5 m in the horizontal direction. The hyperspectral sensor is a Rikola hyperspectral imager (VTT Technical Research Centre of Finland, Finland), a lightweight spectral camera of about 720 g based on a Fabry-Pérot interferometer[23]. The Rikola hyperspectral image size is 1010×1010 pixels, and 42 bands at equal intervals in the 400-900 nm spectral range were selected in this experiment. As shown in Figure 1, the Rikola camera is mounted on a stabilized DJI Ronin-MX gimbal, so that the sensor remains nadir-facing throughout the flight.
In addition, a CHCNAV i70 miniature intelligent RTK receiver (Shanghai, China) was used to obtain high-precision GCPs; its horizontal accuracy is better than 2 cm and its vertical accuracy better than 5 cm when using a differential system. Besides, four standard reference panels (G8T Inc, Utah, USA) with reflectances of 3%, 22%, 48%, and 64% were used for reflectance correction of the hyperspectral images. Three-bar targets[24] were used to detect local differences in the hyperspectral images after resampling.
2.2 Experiment area and experimental design
The experiment area (Figure 2) is a farmland located in Shawan City, Xinjiang Uygur Autonomous Region, China. The ground is flat and broad with a slope of less than 2°. The main features of this farmland are bottle gourd, cotton, and bare soil.
The experimental flight altitudes above ground level (AGL) are 60 m, 80 m, and 100 m, respectively, taking into account the flight altitude limit (120 m) and the usual low flight altitudes of UAV-based hyperspectral imaging[25-28]. In order to ensure the quality of the image mosaic, all hyperspectral data were captured with an 80% along-track overlap and 75% sidelap. Besides, the farmland has no obvious features that can be used as GCPs, and the spatial resolution of the Rikola hyperspectral imager is 6.5 cm at a flight altitude of 100 m AGL. Therefore, landmark discs with a diameter of 20 cm were used as GCPs and were distributed approximately uniformly over the experimental area. Finally, the high-precision latitude and longitude information of 35 GCPs was collected using RTK based on a differential system in this experiment.
2.3 Data preprocessing
Firstly, the original hyperspectral images were transformed into single hyperspectral image cubes by dark current correction, lens vignetting correction, band registration, and format conversion[30]. At the same time, the latitude and longitude coordinates of each hyperspectral image were recorded by the Rikola hyperspectral imager. Then, all single hyperspectral images were mosaicked using Agisoft Photoscan Professional software (https://www.agisoft.com/) based on the latitude and longitude information. In this experiment, the mosaic hyperspectral image is about 180 m long and 160 m wide. Finally, radiometric correction of the mosaic hyperspectral image was completed using the four standard reference panels[31]. Besides, because the mosaic hyperspectral image has only latitude and longitude information and no projection coordinate information, geometric correction and error testing cannot be performed directly. Therefore, the projection conversion function in the ENVI software was used with the Universal Transverse Mercator (UTM) projection to obtain a hyperspectral image with both latitude and longitude coordinates and projection coordinate information.
2.4 Geometric correction model
2.4.1 Affine transformation (Rotation-Scale-Transformation, RST) model
Based on the parallel projection theory, the affine transformation model uses affine transformation parameters a1-a8 to establish the mathematical relationship between the image plane coordinates (x, y) of the image point and the object space coordinates (X, Y, Z) of the corresponding ground point. The equation is as follows:
$$x = a_1 + a_2X + a_3Y + a_4Z, \quad y = a_5 + a_6X + a_7Y + a_8Z \tag{1}$$
where, there are two translation parameters, three rotation parameters, and three image deformation parameters in a1-a8.
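As an illustrative sketch (not the implementation used in this study), the eight affine parameters can be estimated from GCPs by linear least squares; the function names and synthetic data below are assumptions for demonstration only:

```python
import numpy as np

def fit_affine(obj_xyz, img_xy):
    """Least-squares estimate of the 8-parameter affine (RST) model:
    x = a1 + a2*X + a3*Y + a4*Z,  y = a5 + a6*X + a7*Y + a8*Z.
    obj_xyz: (n, 3) ground coordinates; img_xy: (n, 2) image coordinates.
    With 8 unknowns and 2 equations per GCP, at least 4 GCPs are needed."""
    A = np.hstack([np.ones((len(obj_xyz), 1)), obj_xyz])   # (n, 4) design matrix
    ax, *_ = np.linalg.lstsq(A, img_xy[:, 0], rcond=None)  # a1..a4
    ay, *_ = np.linalg.lstsq(A, img_xy[:, 1], rcond=None)  # a5..a8
    return ax, ay

def apply_affine(ax, ay, obj_xyz):
    """Map ground coordinates to image coordinates with fitted parameters."""
    A = np.hstack([np.ones((len(obj_xyz), 1)), obj_xyz])
    return np.stack([A @ ax, A @ ay], axis=1)
```

With more than four GCPs the system is overdetermined, and the least-squares fit averages out small GCP measurement errors.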
2.4.2 Local triangulation model
The local triangulation model[32] divides the entire image into blocks and corrects each block with an affine transformation model to eliminate unevenly distributed image errors. The model uses the selected control points to establish a local triangulated network and then uses the geographic coordinates and the image pixel coordinates of the control points at the three vertices of each triangle to solve the affine transformation parameters. Since the triangulation can only be established within the area enclosed by the control points, the disadvantage of the local triangulation model is that the area outside the control points cannot be geometrically corrected. The established local triangulation usually has the following characteristics:
1) No matter which points are connected to form a triangulation network, the final result of the triangulation network is the same;
2) The circumscribed circle of any triangle in the triangulation network will not contain any point except the three points constituting the triangle. The circumscribed circle of any triangle is shown in Figure 3.
2.4.3 General polynomial model
The general polynomial model is a simple empirical model that mathematically simulates image deformation. The model treats the overall deformation of the image as the combined effect of scaling, translation, rotation, affine, deflection, bending, and higher-order basic deformations. The model uses an appropriate polynomial to express the mathematical relationship between the object space coordinates (X, Y, Z) of the ground point and the image plane coordinates (x, y) of the corresponding image point:
$$x = \sum_{i=0}^{n}\sum_{j=0}^{n-i} a_{ij}X^iY^j, \quad y = \sum_{i=0}^{n}\sum_{j=0}^{n-i} b_{ij}X^iY^j \tag{2}$$
where, the number N of polynomial coefficients ai and bi has a fixed relationship with the polynomial order n: N=(n+1)(n+2)/2, and at least N GCPs are required in the correction process. In addition, the common general polynomial models include the 1st-order, 2nd-order, 3rd-order, and 4th-order polynomial models, and the 1st-order polynomial formula is the same as the affine transformation formula.
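A minimal sketch of fitting such a model, assuming a planar (X, Y) polynomial so that the coefficient count matches N=(n+1)(n+2)/2; the function names are illustrative, not from the study's software:

```python
import numpy as np

def poly_design(X, Y, n):
    """Design matrix of all monomials X^i * Y^j with i + j <= n;
    the number of columns equals N = (n+1)(n+2)/2."""
    cols = [X**i * Y**j for i in range(n + 1) for j in range(n + 1 - i)]
    return np.stack(cols, axis=1)

def fit_polynomial(obj_XY, img_xy, order=2):
    """Least-squares polynomial correction model of the given order.
    Needs at least N = (order+1)(order+2)/2 GCPs."""
    A = poly_design(obj_XY[:, 0], obj_XY[:, 1], order)
    a, *_ = np.linalg.lstsq(A, img_xy[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, img_xy[:, 1], rcond=None)
    return a, b
```

For order 2 the design matrix has 6 columns, matching the minimum of 6 GCPs required by the 2nd-order polynomial model.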
2.4.4 Direct linear transformation (DLT) model
The DLT model is essentially derived from the collinear equation, which can be known from the general form of the collinear equation:
$$x - x_0 = -f\,\frac{a_1(X-X_S)+b_1(Y-Y_S)+c_1(Z-Z_S)}{a_3(X-X_S)+b_3(Y-Y_S)+c_3(Z-Z_S)}, \quad y - y_0 = -f\,\frac{a_2(X-X_S)+b_2(Y-Y_S)+c_2(Z-Z_S)}{a_3(X-X_S)+b_3(Y-Y_S)+c_3(Z-Z_S)} \tag{3}$$
where, (x0, y0) are the coordinates of the principal point of the photograph; f is the principal distance; ai, bi, and ci (i=1, 2, 3) are the direction cosines of the image space coordinate system relative to the object space coordinate system; X, Y, and Z are the coordinates of a GCP; XS, YS, and ZS denote the coordinates of the camera station.
First, it is supposed that the pixel coordinates of a certain control point on the remote sensing image are (x, y), and the relationship after linear correction is
... (4)
The linear errors mainly include those caused by the deformation of the remote sensing image and by internal factors of the sensor, and αi and βi are correction terms arising from the translation of the coordinate origin. Equation (5) is obtained by substituting Equation (4) into Equation (3).
... (5)
After a series of transformations, Equation (5) becomes:
$$x = \frac{L_1X+L_2Y+L_3Z+L_4}{L_9X+L_{10}Y+L_{11}Z+1}, \quad y = \frac{L_5X+L_6Y+L_7Z+L_8}{L_9X+L_{10}Y+L_{11}Z+1} \tag{6}$$
Equation (6) is the basic formula of the DLT model. There are 11 parameters in the equation, so at least 6 GCPs are needed to solve the DLT model.
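The 11 parameters of the rational form above can be solved by rearranging each equation into a linear system, two rows per GCP. The sketch below is illustrative only (hypothetical function names, synthetic parameters):

```python
import numpy as np

def solve_dlt(obj, img):
    """Solve the 11 DLT parameters L1..L11 from >= 6 GCPs.
    Each GCP gives two linear equations: multiplying through by the
    denominator of x = (L1 X + ... + L4)/(L9 X + L10 Y + L11 Z + 1)
    yields rows that are linear in L1..L11.
    obj: (n, 3) ground coordinates; img: (n, 2) image coordinates."""
    rows, rhs = [], []
    for (X, Y, Z), (x, y) in zip(obj, img):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z])
        rhs.append(x)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z])
        rhs.append(y)
    L, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float),
                            rcond=None)
    return L

def project_dlt(L, obj):
    """Apply the fitted DLT model to ground coordinates."""
    X, Y, Z = obj[:, 0], obj[:, 1], obj[:, 2]
    den = L[8]*X + L[9]*Y + L[10]*Z + 1.0
    return np.stack([(L[0]*X + L[1]*Y + L[2]*Z + L[3]) / den,
                     (L[4]*X + L[5]*Y + L[6]*Z + L[7]) / den], axis=1)
```

Six GCPs give 12 equations for 11 unknowns; extra GCPs are absorbed by the least-squares solution.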
2.4.5 Rational function model (RFM)
The RFM expresses the remote sensing image point coordinates (r, c) as the ratio of the polynomials with the corresponding GCP spatial coordinates (X, Y, Z) as the independent variable, as shown in the following equation:
$$r_n = \frac{P_1(X_n,Y_n,Z_n)}{P_2(X_n,Y_n,Z_n)}, \quad c_n = \frac{P_3(X_n,Y_n,Z_n)}{P_4(X_n,Y_n,Z_n)} \tag{7}$$
where, (rn, cn) and (Xn, Yn, Zn) respectively represent the normalized pixel coordinates and the corresponding ground point coordinates, obtained by translating and scaling the original coordinates into dimensionless values ranging from -1 to 1. The transformation relationship is
$$X_n = \frac{X-X_0}{X_s}, \quad Y_n = \frac{Y-Y_0}{Y_s}, \quad Z_n = \frac{Z-Z_0}{Z_s}, \quad r_n = \frac{r-r_0}{r_s}, \quad c_n = \frac{c-c_0}{c_s} \tag{8}$$
where, (X, Y, Z, r, c) are the original coordinates, (X0, Y0, Z0, r0, c0) are the standardized translation parameters, and (Xs, Ys, Zs, rs, cs) are the standardized scale parameters. This normalization reduces the rounding errors caused by large differences in the magnitudes of the data and thus improves the accuracy of the calculation result.
The form of each polynomial Pi(X, Y, Z) (i=1, 2, 3, 4) is as follows:
$$P_i(X,Y,Z) = \sum_{j=0}^{3}\sum_{k=0}^{3-j}\sum_{l=0}^{3-j-k} a_{jkl}\,X^jY^kZ^l \tag{9}$$
The coefficients of the polynomials in Equation (9) are called rational polynomial coefficients (RPCs). The power of each coordinate component X, Y, and Z in any term cannot exceed 3, and the sum of the powers of all coordinate components in any term also cannot exceed 3[33].
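The coordinate normalization and the cubic term structure above can be sketched briefly; this is an illustration of the term enumeration (the helper names are hypothetical), not an RPC solver:

```python
def normalize(v, offset, scale):
    """Eq. (8): translate and scale a coordinate into roughly [-1, 1]."""
    return (v - offset) / scale

def rpc_terms(X, Y, Z):
    """The 20 monomials X^j * Y^k * Z^l with j + k + l <= 3 that make up
    each cubic polynomial Pi(X, Y, Z) of the RFM."""
    return [X**j * Y**k * Z**l
            for j in range(4)
            for k in range(4 - j)
            for l in range(4 - j - k)]
```

Each of the four polynomials thus has 20 coefficients, which is why solving a full third-order RFM requires many well-distributed GCPs and degrades badly otherwise, consistent with the poor RFM accuracy observed in Section 3.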
2.5 Resampling method
In general, there is no brightness value for a non-integer point in the original image array if the position coordinate value of the projection point following coordinate transformation is not an integer. As a result, the brightness value of the nearby point should be used in conjunction with the appropriate resampling method to calculate the brightness value of the point. The most typical resampling methods are the nearest neighbor resampling method, the bilinear interpolation resampling method, and the cubic convolution resampling method[34-36].
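As a minimal sketch of the middle option, bilinear interpolation weights the four surrounding pixel values by their distance to the non-integer position (the function name is illustrative):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Brightness value at a non-integer position (x, y), computed from the
    four surrounding pixels weighted by distance (bilinear interpolation).
    img is indexed as img[row, col] = img[y, x]; (x, y) must lie strictly
    inside the image so that all four neighbors exist."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])
```

Nearest neighbor instead returns `img[round(y), round(x)]` unchanged (preserving original gray values), while cubic convolution uses a 4×4 neighborhood and is correspondingly slower.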
2.6 Accuracy evaluation
In the accuracy evaluation of the experimental results, Δxi and Δyi are the residuals in the x- and y-directions of the i-th checkpoint, respectively, as shown in Equation (10):
$$\Delta x_i = x_i - x'_i, \quad \Delta y_i = y_i - y'_i \tag{10}$$
where, (xi, yi) are the actual coordinates of the i-th checkpoint; (x'i, y'i) are the coordinates of that checkpoint in the image.
Root mean square error (RMSE) of the i-th checkpoint:
$$\mathrm{RMSE}_i = \sqrt{\Delta x_i^2 + \Delta y_i^2} \tag{11}$$
The accuracy of the geometric correction of the remote sensing image increases as the checkpoint median error decreases. The median error of the checkpoints is the square root of the mean squared residual over all checkpoints. The median errors of all checkpoints in the x- and y-directions are
$$m_x = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\Delta x_i^2} \tag{12}$$
$$m_y = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\Delta y_i^2} \tag{13}$$
The error contribution by point is a normalized value representing each point's RMSE in relation to the total RMSE[37]. The error contribution of GCPi is
$$E_i = \frac{R_i}{T_{\mathrm{RMSE}}} \tag{14}$$
where, Ri is the RMSE for GCPi, and TRMSE is the total RMS error for all GCPs.
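The accuracy metrics above can be sketched in a few lines; the function name is illustrative and the total RMSE is taken as the root mean square of the per-point RMSEs:

```python
import numpy as np

def checkpoint_errors(actual, corrected):
    """Residuals (Eq. 10), per-point RMSE (Eq. 11), directional median
    errors (Eqs. 12-13), and per-point error contribution (Eq. 14).
    actual, corrected: (n, 2) arrays of checkpoint coordinates."""
    d = actual - corrected                       # residuals (dx_i, dy_i)
    rmse_i = np.sqrt((d**2).sum(axis=1))         # per-point RMSE
    m_x = np.sqrt((d[:, 0]**2).mean())           # median error, x-direction
    m_y = np.sqrt((d[:, 1]**2).mean())           # median error, y-direction
    total = np.sqrt((rmse_i**2).mean())          # total RMSE over all points
    contrib = rmse_i / total                     # error contribution E_i
    return rmse_i, m_x, m_y, contrib
```

Points with a contribution greater than 1 pull the total RMSE up, which is the criterion used in Section 3.4.1 to pick the 12 edge-middle GCPs.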
2.7 Data processing platform
In this study, the data were processed on a ThinkPad P52s notebook computer. The detailed parameters of the computer are as follows: the processor is an Intel(R) Core(TM) i7-8550U CPU @ 1.80 GHz, the running memory is 32 GB, the display card is an NVIDIA Quadro P500, and the operating system is Windows 10.
3 Results
3.1 Original positioning accuracy of Rikola hyperspectral image
The original positioning accuracy of the hyperspectral image was assessed using the 35 GCPs collected on the ground. As shown in Figure 4, the average residuals in the x- and y-directions of the UAV hyperspectral image at 60 m flight altitude are 2.7879 m and 1.6134 m, respectively, and the RMSE over all control points is 3.2648 m (about 83 pixels). The average residuals in the x- and y-directions at 80 m flight altitude are 1.3843 m and 3.0521 m, respectively, and the RMSE over all control points is 3.4041 m (about 65 pixels). The average residuals in the x- and y-directions at 100 m flight altitude are -2.1464 m and 3.0133 m, respectively, and the RMSE over all control points is 3.995 m (about 61 pixels).
When the UAV flight altitudes are 60 m, 80 m, and 100 m AGL, the spatial resolutions of the Rikola hyperspectral images are 0.039 m, 0.052 m, and 0.065 m, respectively. This study refers to the specifications for the office operation of low-altitude digital aerophotogrammetry[38] for flat areas such as the farmland in this study: the median error of the ground checkpoints should not exceed 0.6 m on a digital orthophoto map with a mapping scale of 1:500 (spatial resolution 0.05 m), and should not exceed 1.2 m at a mapping scale of 1:1000 (spatial resolution 0.1 m). The original Rikola hyperspectral image cannot meet these photogrammetric mapping requirements.
3.2 Comparison of different resampling methods and processing efficiency in the geometric correction
The UAV Rikola hyperspectral image at 80 m flight altitude AGL and 35 GCPs were used to compare the different resampling methods and processing efficiencies in geometric correction. As shown in Figure 5, when the same geometric correction model uses different resampling methods, there are obvious differences in the contour information of the three-bar target[39]. Among them, the contour information of the hyperspectral image resampled by the nearest neighbor method changes the most obviously. The seven geometric correction models produce different degrees of deformation. However, there is no obvious visual difference between the images resampled by bilinear interpolation and cubic convolution.
The geometric correction efficiency is listed in Table 1. Under the same geometric correction model, the time spent on the geometric correction of hyperspectral images using the nearest neighbor, bilinear interpolation, and cubic convolution resampling methods increases sequentially, but the increase is small. With the same resampling method, the correction time of different geometric correction models varies greatly. For example, the geometric correction time of the DLT and RFM models is about six times that of the RST, local triangulation, 2nd-order polynomial, 3rd-order polynomial, and 4th-order polynomial models. However, the geometric correction times of the RST, local triangulation, 2nd-order polynomial, 3rd-order polynomial, and 4th-order polynomial models are similar to one another, as are those of the DLT and RFM models.
3.3 Comparison of correction accuracy for different geometric correction models
As shown in Section 3.2, the bilinear interpolation resampling method is more efficient; there is no visual difference between the images resampled by bilinear interpolation and cubic convolution, but both are better than nearest neighbor. Therefore, the UAV Rikola hyperspectral image at 80 m flight altitude AGL and the more efficient bilinear interpolation resampling method were used to compare the geometric correction accuracy of the different correction models. To avoid unreliable accuracy evaluation caused by too few checkpoints, nine uniformly distributed checkpoints (A2, A7, A9, A14, A20, A22, A26, A30, and A32) were selected to evaluate the experimental results, and the remaining 26 points were used as GCPs. As shown in Figure 6a, the error values in the x- and y-directions of all selected points range from -0.1 to 0.1, and the RMSEs range from 0 to 0.1, which meets the accuracy requirements for geometric correction control points. Furthermore, the errors of the nine checkpoints after geometric correction with the different models are shown in Figures 6b-6d. Compared with the other six geometric correction models, the RFM has the lowest correction accuracy, and the RMSE of the checkpoints can reach 10-16 m. Except for the A2 checkpoint of the 4th-order polynomial model, the correction errors of all checkpoints of the other six geometric correction models in the x- and y-directions range from -0.5 to 0.5, and their RMSEs range from 0 to 1. In addition, among these six models, the correction accuracy of the local triangulation, 2nd-order polynomial, and 3rd-order polynomial models is better.
3.4 Influence of the number, distribution, and accuracy of GCPs on the geometric correction accuracy
In order to compare the experimental results with each other, the same as in Section 3.3, the UAV Rikola hyperspectral image at 80 m flight altitude AGL and the bilinear interpolation resampling method were used to test the influence of the number, distribution, and accuracy of GCPs on the geometric correction accuracy.
3.4.1 The distribution of GCPs
Some GCPs need to be selected to determine the distribution of GCPs. First, the polynomial correction model was used, and the error contributions of the 35 GCPs were calculated. The error contribution plot (Figure 7) shows that the error contribution of 12 points (A1, A2, A3, A4, A6, A14, A20, A30, A32, A33, A34, and A35) is greater than 1. Combined with the number of control points and the geometric correction accuracy in Section 3.4.2, 12 GCPs at different locations were selected to test the influence of the GCP distribution on the geometric correction accuracy. As shown in Figure 8, the 12 GCPs selected by the error contribution are mainly points at the image edge and middle positions (Distribution 1: edge-middle distribution). In addition, the common distribution forms of control points, including uniform distribution (Distribution 2), edge distribution (Distribution 3), medial distribution (Distribution 4), and upper-part distribution (Distribution 5), were also selected in this experiment.
The results are listed in Table 2. The RST, DLT, and RFM models have the highest geometric correction accuracy when the GCPs are uniformly distributed. The local triangulation, 2nd-order polynomial, and 3rd-order polynomial models have the highest geometric correction accuracy when the GCPs lie at the image edge and middle positions. The RST model is little affected by the distribution of GCPs. The local triangulation model cannot establish a triangular network over the whole image when the GCPs are not distributed around the image, resulting in deformation or missing areas in the corrected image (e.g., Distributions 4 and 5). The DLT model is greatly affected by the distribution of GCPs; an inappropriate GCP distribution leads to a serious decline in geometric correction accuracy that cannot meet the mapping requirements. The RFM is less affected by the GCP distribution; however, its geometric correction accuracy is the worst for the UAV Rikola hyperspectral image regardless of the distribution of the control points. The local triangulation, 2nd-order polynomial, and 3rd-order polynomial models have higher correction accuracy than with other types of GCP distribution when the GCPs are selected as uniformly distributed as possible (e.g., Distributions 1 and 2). Furthermore, when the GCPs are edge-middle distributed (Distribution 1), the local triangulation model has the highest geometric correction accuracy.
3.4.2 Number of GCPs
Section 3.4.1 shows that when the control points are edge-middle distributed or uniformly distributed, the geometric correction accuracy is higher. Based on the principle of edge-middle and uniform distribution of control points, 3 control points (A3, A19, and A33), 4 control points (A1, A10, A25, and A33), 6 control points (A1, A3, A10, A18, A25, and A33), 12 control points (A1, A3, A5, A10, A13, A16, A18, A24, A25, A28, A33, and A35), 18 control points (A1, A3, A4, A5, A6, A10, A11, A13, A16, A18, A23, A24, A25, A27, A28, A33, A34, and A35), and 26 control points (the same as in Section 3.3) were selected to test the influence of the number of GCPs on the accuracy of geometric correction.
Table 3 shows that the geometric correction accuracy improves as the number of GCPs increases. However, the geometric correction accuracy decreases when there are 18-26 GCPs in the local triangulation model, 12-18 GCPs in the 2nd-order polynomial, 3rd-order polynomial, and DLT models, and 3 or 4 GCPs in the RFM. The results indicate that the number of GCPs is not the only factor affecting the geometric correction accuracy.
3.4.3 The accuracy of GCPs
1, 3, and 6 control points were randomly selected from the 26 uniformly distributed GCPs, and errors (unit: m) of (0.1, 0.1), (0.5, 0.5), (1.0, 1.0), (1.5, 1.5), and (2.0, 2.0) were artificially added. The results are listed in Table 4. The accuracy of the GCPs has a significant impact on the geometric correction accuracy. Even when the error of a GCP is only (0.1, 0.1), the geometric correction accuracy decreases as the number of erroneous control points increases. When the control point error is large and there are too many erroneous control points, geometric correction cannot be performed. In addition, the accuracy of a single control point significantly influences the DLT model, and the accuracy of the GCPs greatly influences the RFM. However, when the error is at the level of 10 pixels (0-0.5 m) and the proportion of erroneous control points in the total number is not too high, the geometric correction accuracy of the RST, local triangulation, 2nd-order polynomial, and 3rd-order polynomial models is not significantly affected, which fully meets the mapping requirements.
3.5 Comparison of correction accuracy for UAV hyperspectral images at different flight altitudes
The same 26 GCPs and nine checkpoints as in Section 3.3 were selected, and the bilinear interpolation resampling method was used to compare the geometric correction accuracy of the hyperspectral images at different flight altitudes.
Table 5 lists the slight differences in image geometric correction accuracy at different altitudes. First, when the flight altitude increases from 60 to 100 m, the correction accuracy increases, but the difference in geometric correction accuracy of the RST, local triangulation, 2nd-order polynomial, and 3rd-order polynomial models is less than 2 pixels. In addition, the correction accuracy of the RFM is poor, and the change of altitude produces a large difference in correction accuracy (up to hundreds of pixels). The other models have high accuracy in correcting Rikola hyperspectral images at different flight altitudes and meet the requirements of photogrammetric mapping. There is little difference in the correction accuracy of hyperspectral images at different altitudes after geometric correction using the 4th-order polynomial model. However, as shown in Figures 9e and 9f, the high-order polynomial models cause serious image deformation. Therefore, the high-order polynomial models are unsuitable for the geometric correction of UAV hyperspectral images.
4 Discussion
4.1 Analysis of the causes of geometric errors in UAV hyperspectral images
The main causes of geometric error in remote sensing images are topographic relief, earth curvature, atmospheric refraction, earth rotation, the working mode of the remote sensor, the platform attitude of the remote sensor (e.g., yaw, roll, and pitch), and the aircraft's state (e.g., speed change and altitude change)[40]. The flight time of one set of batteries for the DJI Matrice 600 Pro UAV is about 12 min, so the area covered by the hyperspectral image in a single flight is small. The farmland in this study is also flat. The weather conditions were good on the day of data acquisition (clear weather, no sand and dust, cloud cover less than 5%, and wind speed less than level 2). The DJI Ronin-MX gimbal was used to stabilize the center point of the hyperspectral image on the route. The single hyperspectral images were not deformed after inspection, and the obtained Rikola hyperspectral images were corrected by the imager's software system. Therefore, the influence of the above geometric error sources can be ignored.
As shown in Figure 10, the main geometric error of the mosaic UAV hyperspectral image is an overall shift of the image along a certain direction, although this overall shift does not mean that the displacement of every pixel in the image is exactly equal. A similar overall shift relative to the original image was also observed in the geometric correction of satellite images by Guo[15], whose imaging mode is push-broom, whereas the Rikola hyperspectral imager in this study is a frame camera. Therefore, the geometric error caused by the overall shift of the image along a certain direction has no direct relationship with the imaging mode of the hyperspectral sensor.
In addition, as shown in Figure 11, the red straight line is the planned aerial survey route of the UAV at 80 m flight altitude AGL in this experiment, and the location of each single hyperspectral image at its shooting point on the route is determined by the latitude and longitude of the image center. Comparing the direction of each control point error in Figure 9 with the x-direction error, y-direction error, and RMSE of each control point in Table 4 shows that the heading and flight path of the UAV are not directly related to the geometric errors of the mosaic UAV hyperspectral image. However, the latitude and longitude information used in mosaicking was recorded by the global positioning system (GPS) module of the Rikola hyperspectral imager. The GPS module cannot achieve centimeter-level positioning accuracy, leading to low positioning accuracy of the image centers. Therefore, for the Rikola frame hyperspectral imager, in the absence of significant external influences such as weather, it can be tentatively concluded that the geometric error of the mosaic Rikola hyperspectral image is mainly caused by the insufficient accuracy of the single-image center coordinates obtained by the GPS module. Accordingly, the geometric error of the UAV hyperspectral image can be reduced by improving the accuracy of the GPS module.
4.2 Comprehensive analysis for geometric correction results
1) In terms of the resampling principle, the nearest neighbor method does not alter the gray values of the original pixels and therefore has the highest pixel-value fidelity and the simplest calculation, but its visual effect is the worst. Both the bilinear interpolation and cubic convolution methods modify the original gray values, and their visual effects are essentially the same. The spectral curves extracted from the hyperspectral images after geometric correction with the three resampling methods show no significant differences. Considering, in addition, the influence of the resampling method on geometric correction efficiency, bilinear interpolation can be taken as the best resampling method.
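Why bilinear interpolation does not preserve the original gray values can be sketched in a few lines of Python (a minimal illustration, not the resampling code used in this study): each output pixel is a distance-weighted blend of its four nearest source pixels.

```python
def bilinear_sample(img, x, y):
    """Resample one output pixel at fractional source coordinates (x, y).

    img is a 2D list of gray values. The four neighboring pixels are
    blended with weights proportional to proximity, which smooths the
    result; the output need not equal any original gray value.
    Border pixels would need clamping in production code.
    """
    x0, y0 = int(x), int(y)          # top-left neighbor
    x1, y1 = x0 + 1, y0 + 1          # bottom-right neighbor
    dx, dy = x - x0, y - y0          # fractional offsets in [0, 1)
    return ((1 - dx) * (1 - dy) * img[y0][x0] +
            dx * (1 - dy) * img[y0][x1] +
            (1 - dx) * dy * img[y1][x0] +
            dx * dy * img[y1][x1])

grid = [[0, 10],
        [20, 30]]
print(bilinear_sample(grid, 0.5, 0.5))  # 15.0, the average of all four neighbors
```

Nearest neighbor would instead return the single closest original value (here 0, 10, 20, or 30), preserving fidelity at the cost of blocky visual artifacts.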
2) Combining Figure 6, Figure 9, and Table 5, the geometric correction accuracy of the 3rd-order and 4th-order polynomial models is lower than that of the 2nd-order polynomial model, and these higher-order models introduce geometric deformation of the image. The reason may be that the large number of undetermined parameters in the 3rd-order and 4th-order polynomial models causes parameter correlation during calculation, which reduces the accuracy of the precision geometric correction and deforms the corrected image to varying degrees. In addition, the polynomial model is a nonlinear transformation that does not depend on the sensor type, so it is widely used in practice. The 1st-order polynomial model can correct six kinds of geometric deformation of remote sensing images: translation and scale change in the x- and y-directions, rotation, and tilt. In this study, the RST and 2nd-order polynomial models also achieved a good correction effect, which can meet the requirements of mapping and application.
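The six parameters of the 1st-order polynomial model can be made concrete with a short sketch (hypothetical GCP values chosen for illustration; this is not the correction software used in the study). Three non-collinear GCPs determine the six coefficients of X = a0 + a1·x + a2·y and Y = b0 + b1·x + b2·y exactly; with more GCPs, a least-squares fit would be used instead.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_affine(gcps):
    """Fit X = a0 + a1*x + a2*y (and likewise Y) through 3 GCPs.

    gcps: list of ((x, y), (X, Y)) image/map coordinate pairs.
    Solved exactly by Cramer's rule on the shared 3x3 design matrix.
    """
    A = [[1.0, x, y] for (x, y), _ in gcps]
    d = det3(A)
    coeffs = []
    for k in (0, 1):                  # k=0 solves for X, k=1 for Y
        b = [ground[k] for _, ground in gcps]
        c = []
        for col in (0, 1, 2):         # replace one column at a time
            M = [row[:] for row in A]
            for i in range(3):
                M[i][col] = b[i]
            c.append(det3(M) / d)
        coeffs.append(c)
    return coeffs                      # [[a0, a1, a2], [b0, b1, b2]]

# Synthetic GCPs generated by a pure translation of (100, 200):
gcps = [((0, 0), (100, 200)), ((10, 0), (110, 200)), ((0, 10), (100, 210))]
a, b = fit_affine(gcps)
print(a, b)   # recovers a0=100, a1=1, a2=0 and b0=200, b1=0, b2=1
```

The 2nd-order model extends each row of the design matrix with x², xy, and y² terms (12 parameters), which is where the parameter-correlation problem of still higher orders begins.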
3) The local triangulation model imposes the most stringent requirements on the distribution of control points but achieves the highest geometric correction accuracy. As shown in Section 3.5.1, the local triangulation model can only perform geometric correction within the area covered by the control points.
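The limitation noted above can be illustrated with a small sketch (hypothetical coordinates; a real implementation would triangulate all GCPs, e.g. with a Delaunay triangulation). Inside each GCP triangle a separate affine transform applies, expressed here via barycentric weights; a negative weight signals a point outside the triangle, where no transform is defined.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c).

    Inside the triangle all three weights lie in [0, 1]; a negative
    weight means p is outside, which is why a triangulation model can
    only correct the area covered by the GCP triangles.
    """
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return w1, w2, 1.0 - w1 - w2

def warp_point(p, tri_src, tri_dst):
    """Map p through the affine transform defined by one GCP triangle."""
    w = barycentric(p, *tri_src)
    x = sum(wi * xi for wi, (xi, _) in zip(w, tri_dst))
    y = sum(wi * yi for wi, (_, yi) in zip(w, tri_dst))
    return x, y

src = [(0, 0), (10, 0), (0, 10)]     # GCP image coordinates
dst = [(5, 5), (15, 5), (5, 15)]     # GCP map coordinates (translated)
print(warp_point((2, 3), src, dst))  # ~ (7.0, 8.0)
print(barycentric((20, 20), *src))   # contains a negative weight: outside
```

Because each triangle carries its own transform, dense and well-spread GCPs give very local, hence very accurate, corrections, but nothing outside the triangulated hull can be corrected.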
4) In general, the geometric correction effect of the DLT model is inferior to that of the local triangulation and 2nd-order polynomial models. Additionally, the geometric correction accuracy of the UAV hyperspectral image with the DLT model varies widely and is the most easily affected by the number, distribution, and accuracy of control points. This may be because the geometric distortion of UAV hyperspectral images cannot be described by an entirely linear expression.
5) The UAV hyperspectral image corrected with the RFM has the worst geometric correction accuracy and cannot meet mapping requirements. The reason may be that UAV Rikola hyperspectral images do not come with RPC parameter files as satellite remote sensing data do[41], so only RPC files generated from known control points can be used. The RFM, however, has many RPC parameters, while only a few GCPs were available in this study, resulting in an insufficiently accurate RPC file. Consequently, there is a large deviation between the correction result and the actual position when the RFM is used for geometric correction, which seriously degrades the correction accuracy.
6) The geometric correction accuracy of UAV hyperspectral images differs among flight altitudes, but the differences are relatively small and not proportional to the flight altitude. The number, distribution, and accuracy of GCPs all affect the correction accuracy, and among them the distribution of GCPs has the greatest influence. Control points with excessive error or low accuracy degrade the correction results most directly, so such control points must be eliminated during the precision geometric correction. When there are fewer than 12 GCPs, the geometric correction accuracy is poor or the correction cannot be performed at all, while the accuracy changes little when the number of GCPs is between 12 and 26. Given how time- and labor-consuming it is to collect GCPs, it is best to use as few as possible. In light of the results from Table 2, it can be concluded that 12 is the ideal number of GCPs.
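The accuracy measure behind these comparisons, the planimetric RMSE over check points, is simple to compute; the sketch below (with made-up coordinates) also shows the quantity that would be inspected when eliminating control points with excessive error.

```python
import math

def gcp_rmse(predicted, actual):
    """Planimetric RMSE over check points.

    predicted/actual: lists of (X, Y) map coordinates. Points whose
    individual error is far above the mean would be discarded as bad
    GCPs before refitting the correction model.
    """
    sq = [(px - ax) ** 2 + (py - ay) ** 2
          for (px, py), (ax, ay) in zip(predicted, actual)]
    return math.sqrt(sum(sq) / len(sq))

# One perfect check point and one with a 0.5 m planimetric error:
pred = [(100.0, 200.0), (110.3, 200.4)]
true = [(100.0, 200.0), (110.0, 200.0)]
print(round(gcp_rmse(pred, true), 4))  # 0.3536
```

Dividing such an RMSE (in meters) by the ground sampling distance gives the sub-pixel accuracy figures quoted in this paper, e.g. 0.0493 m against a roughly 5 cm pixel.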
5 Conclusions
In this study, the factors affecting the accuracy of UAV hyperspectral geometric correction (geometric correction model, resampling method, and the number, distribution, and accuracy of GCPs) were comprehensively analyzed. The optimal geometric correction model and parameter combination used a local triangulation model, adopted a bilinear interpolation resampling method, and selected 12 edge-middle distributed GCPs (TRMSE=0.0493 m, t=14.27 s). In addition, considering the efficiency of geometric correction, the optimal geometric correction model and parameter combination used a 2nd-order polynomial model, adopted a bilinear interpolation resampling method, and selected 12 edge-middle distributed GCPs (TRMSE=0.0525 m, t=9.17 s). In future work, the authors plan to study the geometric correction of UAV hyperspectral images of complex scenes, such as mountains and terraces.
Acknowledgments
The authors acknowledge that this work was financially supported by the National Nature Science Foundation of China (Grant No. 32260388), the Major Scientific and Technological Projects of the XPCC (Grant No. 2017DB005), and the Technology Development Guided by the Central Government (Grant No. 201610011). We also thank Guoshun Zhang and Xiang Long for their assistance in collecting the UAV data and ancillary measurements.
Received date: 2021-10-04 Accepted date: 2022-12-07
Biographies: Wenzhong Tian, PhD candidate, research interest: agricultural informatization and UAV remote sensing, Email: [email protected]; Ping Jiang, Postgraduate, research interest: UAV hyperspectral agricultural remote sensing, Email: [email protected]; Xuewen Wang, Postgraduate, research interest: UAV remote sensing for forestry, Email: [email protected]; Hanqing Liu, Postgraduate, research interest: hyperspectral image quality evaluation, Email: [email protected].
Citation: Tian W Z, Kan Z, Zhao Q Z, Jiang P, Wang X W, Liu H Q. Optimal combination of the correction model and parameters for the precision geometric correction of UAV hyperspectral images. Int J Agric & Biol Eng, 2024; 17(3): 173-184.
*Corresponding author: Za Kan, Professor, research interest: agricultural mechanization and automation. College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832000, Xinjiang, China. Tel: +86-13031321373, Email: [email protected]; Qingzhan Zhao, Professor, research interest: agricultural informatization. College of Information Science and Technology, Shihezi University, Shihezi 832000, Xinjiang, China. Tel: +86-13031331750, Email: [email protected].
[References]
[1] Adão T, Hruška J, Pádua L, Bessa J, Peres E, Morais R, et al. Hyperspectral imaging: A review on UAV-based sensors, data processing and applications for agriculture and forestry. Remote Sensing, 2017; 9(11): 1110.
[2] Liu X M, Wang H C, Cao Y W, Yang Y T, Sun X T, Sun K, et al. Comprehensive growth index monitoring of desert steppe grassland vegetation based on UAV hyperspectral. Frontiers in Plant Science, 2023; 13: 1050999.
[3] Banerjee B P, Raval S, Cullen P J. UAV-hyperspectral imaging of spectrally complex environments. International Journal of Remote Sensing, 2020; 41(11): 4136-4159.
[4] Aasen H, Honkavaara E, Lucieer A, Zarco-Tejada P J. Quantitative remote sensing at ultra-high resolution with UAV spectroscopy: A review of sensor technology, measurement procedures, and data correction workflows. Remote Sensing, 2018; 10(7): 1091.
[5] Senn J A, Mills J P, Miller P E, Walsh C, Addy S, Loerke E, et al. On-site geometric calibration of thermal and optical sensors for UAS photogrammetry. Remote Sensing and Spatial Information Sciences, 2020; XLIII(B1): 355-361.
[6] Song L Y, Li H W, Chen T Q, Chen J Y, Liu S, Fan J, et al. An integrated solution of UAV push-broom hyperspectral system based on geometric correction with MSI and radiation correction considering outdoor illumination variation. Remote Sensing, 2022; 14(24): 6267.
[7] Wang T Y, Zhang G, Yu L, Zhao R S, Deng M J, Xu K. Multi-mode GF-3 satellite image geometric accuracy verification using the RPC model. Sensors, 2017; 17(9): 2005.
[8] Ma S B, Yang W F, Pi Y N, Li S H, Xin R F. Research on geometrical calibration precision of GF-2 satellite data based on RFM model. Techniques of Automation and Applications, 2020; 39(9): 57-60. (in Chinese)
[9] Zhong X M. Research on geometric rectification of high resolution remote sensing image in forest areas. Master dissertation. Xi'an: Xi'an University of Science and Technology, 2011; pp.44-51. (in Chinese)
[10] Wang L. Research on the methods of GF-2 satellite remote sensing image geometric correction. Master dissertation. Changchun: Jilin University, 2018; pp.33-41. (in Chinese)
[11] Wang Z W, Yang G D, Zhang X Q, Wang F Y, Bi Y H. A comparative research on the accuracy of different geometric correction methods of Gaofen-6 satellite remote sensing image. World Geology, 2021; 40(1): 125-130, 139. (in Chinese)
[12] Eltohamy F, Hamza E H. Effect of ground control points location and distribution on geometric correction accuracy of remote sensing satellite images. In: 13th International Conference on Aerospace Sciences & Aviation Technology (ASAT-13), Vienna, Austria, 2009; pp.1-14.
[13] Wang J H, Ge Y, Heuvelink G B M, Zhou C H, Brus D. Effect of the sampling design of ground control points on the geometric correction of remotely sensed imagery. International Journal of Applied Earth Observation and Geoinformation, 2012; 18: 91-100.
[14] Babiker MEA, Akhadir SKY. The effect of densification and distribution of control points in the accuracy of geometric correction. International Journal of Scientific Research in Science, Engineering and Technology, 2016; 2(1): 65-70.
[15] Guo C. Research on geometric correction method of Jilin-1 satellite image. Master dissertation. Jilin: Jilin University, 2019; pp.35-45. (in Chinese)
[16] Jiang C S, Zhang X Q, Yang G D, Geng Y D, Liu Z W, Wang T. Geometric correction method test of Jilin 1 satellite image. Geomatics & Spatial Information Technology, 2020; 43(8): 208-211. (in Chinese)
[17] Yang G P. Research and application on geometric correction methods of remote sensing image. Master dissertation. Xi'an: Xi'an University of Architecture and Technology, 2010; pp. 16-19. (in Chinese)
[18] Hruska R, Mitchell J, Anderson M, Glenn N F. Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle. Remote Sensing, 2012; 4(9): 2736-2752.
[19] Tawfik M, Elhifnawy H, Ragab A, Hamza E. The effect of image resolution on the geometric correction of remote sensing satellite images. International Journal of Engineering and Applied Sciences, 2017; 4(5): 114-121.
[20] Jiang Y, Li N, Meng L J, Cai H, Gong X M, Zhao H J. Geometric correction method of core hyperspectral data based on error analysis. Infrared and Laser Engineering, 2018; 47(5): 175-182. (in Chinese)
[21] Li C R. UAV remote sensing load comprehensive verification system technology, 1st ed. Beijing: China Science Publishing & Media Ltd., 2014; pp. 164-186.
[22] Mesas-Carrascosa F J, Torres-Sánchez J, Clavero-Rumbao I, Garcia-Ferrer A, Peña J M, Borra-Serrano I, et al. Assessing optimal flight parameters for generating accurate multispectral orthomosaicks by UAV to support site-specific crop management. Remote Sensing, 2015; 7: 12793-12814.
[23] Saillen N, Aballea L, Buhler D, Montrone L, Reyes D N, Francois M, et al. Technological innovation for the ALTIUS atmospheric limb sounding mission: steps towards flight. International Conference on Space Optics - ICSO 2022, Dubrovnik: SPIE 2023; 12777: 210-229.
[24] Tian W Z, Zhao Q Z, Hu H W, Ma Y J, Li P T, Luo X Z. A movable target for ground resolution evaluation of UAV optical load. CN208833469U. 2019.
[25] Zhao X Q, Yang G J, Liu J G, Zhang X Y, Xu B, Wang Y J, et al. Estimation of soybean breeding yield based on optimization of spatial scale of UAV hyperspectral image. Transactions of the CSAE, 2017; 33(1): 110-116. (in Chinese)
[26] Geipel J, Bakken A K, Jorgensen M, Korsaeth A. Forage yield and quality estimation by means of UAV and hyperspectral imaging. Precision Agriculture, 2021; 22: 1437-1463.
[27] Yu R, Luo Y Q, Zhou Q, Zhang X D, Wu D W, Ren L L. A machine learning algorithm to detect pine wilt disease using UAV-based hyperspectral imagery and LiDAR data at the tree level. International Journal of Applied Earth Observation and Geoinformation, 2021; 101(1): 102363.
[28] Tian W Z, Zhao Q Z, Ma Y J, Long X, Wang X W. Flight parameter setting of unmanned aerial vehicle hyperspectral load. Journal of Applied Spectroscopy, 2022; 89(1): 159-169.
[29] Kern J, Schenk A, Hinz S. Radiometric calibration of a UAV-mounted hyperspectral snapshot camera with focus on uniform spectral sampling. In: 2018 9th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, 2018; pp.1-5.
[30] Cao S, Danielson B, Clare S, Koenig S, Campos-Vargas C, Sanchez-Azofeifa A. Radiometric calibration assessments for UAS-borne multispectral cameras: Laboratory and field protocols. ISPRS Journal of Photogrammetry and Remote Sensing, 2019; 149: 132-145.
[31] Poncet A M, Knappenberger T, Brodbeck C, Fogle Jr. M, Shaw J N, Oriz B V. Multispectral UAS data accuracy for different radiometric calibration methods. Remote Sensing, 2019; 11(16): 1917.
[32] Storey J C, Rengarajan R, Choate M J. Bundle adjustment using space-based triangulation method for improving the Landsat global ground reference. Remote Sensing, 2019; 11(14): 1640.
[33] Gholinejada S, Amiri-Simkooei A, Moghaddam SHA, Naeini A A. An automated PCA-based approach towards optimization of the rational function model. ISPRS Journal of Photogrammetry and Remote Sensing, 2020; 165: 133-139.
[34] Inamdar D, Kalacska M, Darko P O, Arroyo-Mora J P, Leblanc G. Spatial response resampling (SR2): Accounting for the spatial point spread function in hyperspectral image resampling. MethodsX, 2023; 10: 101998.
[35] Lyons M B, Keith D A, Phinn S R, Mason T J, Elith J. A comparison of resampling methods for remote sensing classification and accuracy assessment. Remote Sensing of Environment, 2018; 208: 145-153.
[36] Wang X X, Zuo X Q, Yang Z N. Research on realization and application of remote sensing image resampling. Computer Engineering & Software, 2019; 40(7): 42-46. (in Chinese)
[37] Tawfeik M, Elhifnawy H, Hamza E, Shawky A. Determination of suitable requirements for geometric correction of remote sensing satellite images when using ground control points. International Research Journal of Engineering and Technology, 2016; 3(10): 54-62.
[38] CH/Z 3003-2010. Specifications for office operation of low-altitude digital aerophotogrammetry. 2010.
[39] ISO 12233-2000. Photography-electronic still-picture cameras-resolution measurements. 2000.
[40] Wang Y H, Cong Q, Yao S, Jia X Y, Chen J Y, Li S Y. Research on geometric error correction of pushbroom hyperspectral camera carried by UAV. Seventh Symposium on Novel Photoelectronic Detection Technology and Applications, Kunming, China, 2021; 117634G.
[41] Li A, Huang F R, Wen Z. Plane accuracy analysis of multi-source high resolution satellite image. In: 2019 IEEE International Conference on Signal, Information and Data Processing, Chongqing: IEEE, 2019; pp.1-6.
© 2024. This work is published under https://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
1 College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832000, Xinjiang, China; Research Center for Space Information Engineering Technology, Shihezi 832000, Xinjiang, China; XPCC Division of National Remote Sensing Center of China, Shihezi 832000, Xinjiang, China;
2 College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832000, Xinjiang, China;
3 XPCC Division of National Remote Sensing Center of China, Shihezi 832000, Xinjiang, China; College of Information Science and Technology, Shihezi University, Shihezi 832000, Xinjiang, China;
4 College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832000, Xinjiang, China; XPCC Division of National Remote Sensing Center of China, Shihezi 832000, Xinjiang, China;