1. Introduction
Urban green space (UGS) describes the range of publicly accessible, naturally vegetated areas in cities that are widely used for recreation [1]. Several scholars have worked to measure UGS and quantify its associated benefits [2,3]. Green volume was proposed early as an ecological indicator of greening in UGS, but it initially lacked a precise definition, leading to varying interpretations across research fields [4].
During the 1990s, research on UGS gradually converged on a consistent concept for quantifying greening [5,6]. One widely adopted concept, proposed by Chen [5], quantified greening through the leaf area index (LAI), an indicator based on vegetation structure [5]. Subsequent scholars employed instruments such as plant canopy analyzers to measure LAI, facilitating practical research in UGS [7,8,9]. While these instruments provided accurate LAI, they primarily captured the two-dimensional aspect of vegetation structure and did not fully represent its vertical structure. To capture the vertical structure of vegetation, researchers proposed another greening concept named three-dimensional green volume (3DGV) [10]. This indicator reflects the spatial volume occupied by growing vegetation, extending the evaluation perspective of UGS from two dimensions to three. It is defined with respect to the vegetation crown volume and estimated by fitting equations to the crown morphology, crown diameter, and crown height recorded in field surveys [11]. Although the field work for this method is time-consuming and tedious, it has gained wide recognition for its highly accurate assessment of UGS [12,13].
In recent years, advancements in estimation methods for quantifying greening have emerged. Laser scanning technology has been employed to extract high-precision parameters from LiDAR point cloud data [14,15,16,17,18]. For example, a gap-fraction method based on the Beer–Lambert law was introduced to calculate LAI directly from LiDAR data [14]. Li used a high-precision 3D laser scanner in an urban forest to obtain point cloud data and regressed a model that achieved a user accuracy of 88.07% [17]; other scholars employed a backpack laser scanner to obtain the parameters of individual trees, resulting in a bias of −3.8% and an RMSE of 26% against measurement data [18]. Furthermore, unmanned aerial vehicle (UAV) technology has also been applied in estimation due to its lower cost and higher accuracy [19,20,21,22]. The Beer–Lambert law was commonly employed to estimate LAI based on canopy gaps extracted from high-spatial-resolution UAV images, achieving an overall relative error of 27% [19]. UAVs equipped with LiDAR or RGB sensors captured point cloud data to calculate the canopy height model (CHM), and the CHM was often used to estimate 3DGV combined with a canopy detection algorithm [20], voxel algorithm [22], or mean of neighboring pixels (MNP) algorithm [21]. The 3DGV estimation based on CHM can achieve excellent results compared to field measurement data, with a relative bias of 17.31% and a relative RMSE of 19.94% [21]. However, laser scanning and UAV technology have limitations when applied at large scales due to high costs and complex operation. Satellite images have therefore been introduced for large-scale applications. Some researchers have also estimated vegetation parameters using image metrics derived from Sentinel-2 images, including timber volume [23], forest structure characteristics [24,25], and LAI [26,27].
The Normalized Difference Vegetation Index (NDVI) has been widely used to construct regression models for LAI estimation [26,27], and Zhang et al. showed that an exponential model of NDVI derived from Sentinel-2 can achieve an R2 of 0.82 [26]. In addition, satellite images have been applied in other applications, such as retrieval of crop biophysical parameters [28], land cover mapping [29], and above-ground biomass estimation [30]. Although satellite images have been extensively applied in various fields due to their high revisit frequency and wide coverage, few studies have directly applied them to 3DGV estimation.
Therefore, our study aimed to further explore 3DGV estimation based on satellite images. The primary objectives of this study are as follows: (1) to retrieve 3DGV from multi-source remote sensing data; (2) to explore the correlations between 3DGV and LAI and between 3DGV and CHM; (3) to develop a parametric estimation model for UGS based on Sentinel-1 and Sentinel-2 images.
2. Materials and Methods
The detailed workflow of 3DGV retrieval in our research is displayed in Figure 1. Firstly, we acquired UAV RGB images and measured vegetation parameters, including crown height and crown diameter. From the UAV RGB images, we extracted the CHM from the Digital Surface Model (DSM) and Digital Terrain Model (DTM), and calculated LAI using the Beer–Lambert law. 3DGV was estimated by the MNP algorithm; the CHM and 3DGV derived from UAV images were assessed against field measurements and provided the reference data for our study. Secondly, two backscatter coefficients with the Vertical–Vertical (VV) and Vertical–Horizontal (VH) polarization modes, spectral bands, and image metrics were extracted from Sentinel-1 and Sentinel-2 Level 1C (L1C) images. The metrics chosen by feature selection were combined with the UAV-derived LAI and CHM to estimate satellite-derived LAI and CHM based on an exponential model and an RF regression model, respectively. Then, the satellite-derived LAI and CHM were combined with the UAV-derived 3DGV to construct 3DGV estimation models based on their correlations. Finally, the accuracy of the various models was assessed and compared to select the 3DGV estimation model with the highest accuracy, and the performance of the optimal model was further evaluated by difference analysis and cross-validation using the distribution map of reference 3DGV.
2.1. A Brief Description of Study Area and Study Sites
Kunming is one of China’s major garden cities; it is known as one of the most livable cities with high green coverage in China. It is located in the subtropical highland monsoon climate zone with low latitudes, which enjoys abundant sunshine, short frost periods, sufficient rainfall, and experiences minimal temperature fluctuations throughout the year. The study sites focus on two specific regions within Kunming City, both exhibiting favorable conditions for plant growth due to ample sunshine and sufficient rainfall, chosen to represent the vegetation distributions in UGS. The first region has a coverage area of 1.12 km2, and is located around YueYaTan Park in Wuhua District (25°05′20″N~25°06′00″N, 102°43′00″E~102°44′0″E). The second region covers an area of 1.45 km2, and surrounds ZhengHe Park in Jinning District (24°39′40″N~24°41′00″N, 102°35′00″E~102°36′30″E) (Figure 2). These regions offer different distributions of vegetation species and density, allowing for the comprehensive analysis of 3DGV in different urban landscapes.
2.2. Data Acquisition and Processing
2.2.1. Sentinel-1 Images
The Sentinel-1 satellite carries C-band Synthetic Aperture Radar (SAR), which is widely used for earth observation and forest resource inventory [25]. For Sentinel-1 SAR, we collected Level-1 Ground Range Detected (GRD) products from Google Earth Engine (GEE) in the interferometric wide (IW) swath mode in descending pass direction. Two backscattering coefficients under the VV and VH polarization modes from 15 April 2022 to 15 May 2022 were extracted. The backscatter, integral to the radar signal, measures the extent to which a target redirects the radar signal back to the antenna, reflecting the target's reflective strength. We calculated the median image over this period and resampled it to 10 m spatial resolution.
2.2.2. Sentinel-2 Images
The combination of Sentinel-1 and Sentinel-2 enables the generation of estimation products of vegetation structure attributes [24,31]. The Sentinel-2 satellite carries a multispectral imager (MSI) covering 13 spectral bands at spatial resolutions of 10 m, 20 m, and 60 m. The swath width is 290 km, with a revisit interval of five days under a consistent viewing angle. For Sentinel-2, we collected L1C images from 15 April 2022 to 15 May 2022 on the GEE platform. Images were processed with atmospheric correction, cloud masking, and median composition, and then resampled to 10 m spatial resolution.
For accurately calculating vegetation parameters, the extraction of pure vegetated pixels was important. The NDVI threshold method was adopted for this purpose; it has been widely used to extract vegetated areas and proven to be an effective and rapid method that does not depend on other a priori information [32,33]. Additionally, the Otsu algorithm, a reliable method for determining the NDVI threshold for vegetation extraction [34,35], was used to obtain the NDVI threshold in our study. The Otsu algorithm is an automated, streamlined approach to clustering-based image segmentation: it efficiently determines the optimal threshold by examining each pixel's grey value and evaluating the inter-class variance.
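As an illustration, the Otsu thresholding step described above can be sketched as a minimal NumPy implementation over an NDVI sample. The function name and the synthetic bimodal NDVI values are our own assumptions, not taken from the study:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold that maximizes inter-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    weights = hist / hist.sum()

    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = weights[:i].sum(), weights[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (weights[:i] * centers[:i]).sum() / w0   # mean of class below threshold
        mu1 = (weights[i:] * centers[i:]).sum() / w1   # mean of class above threshold
        between_var = w0 * w1 * (mu0 - mu1) ** 2       # inter-class variance
        if between_var > best_var:
            best_var, best_t = between_var, centers[i]
    return best_t

# Synthetic bimodal NDVI sample: bare surfaces around 0.1, vegetation around 0.6
rng = np.random.default_rng(0)
ndvi = np.concatenate([rng.normal(0.1, 0.05, 5000), rng.normal(0.6, 0.08, 5000)])
t = otsu_threshold(ndvi)
vegetation_mask = ndvi > t
```

On such clearly bimodal data the threshold falls between the two modes, which is how the 0.28 threshold in the study sites would be obtained from the real NDVI histogram.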
In the processed Sentinel-2 images, we calculated NDVI over the two plots and obtained an NDVI threshold of 0.28 using the Otsu algorithm. We randomly created 2000 sample pixels across the two classes and validated the vegetation extraction accuracy, which showed an overall accuracy of 91.72% and a Kappa coefficient of 0.91. The vegetated areas extracted by the NDVI threshold are displayed in Figure 3. All data used in the study were masked by these vegetated areas, ensuring that only pure vegetation pixels were included in the analysis.
2.2.3. Acquisition and Preprocessing of UAV Images
The four-rotor DJI Phantom 4 RTK (SZ DJI Technology Co., Shenzhen, China) was employed to collect UAV RGB images on 29 April 2022, with a resolution of 7952 × 5304 pixels in JPG format. GPS locations were recorded in the WGS-84 coordinate system. The flight route was defined at the ground station, and the UAV flew at 60 m, providing an image spatial resolution of 0.02 m. The primary flight orientation followed an east–west trajectory and the secondary orientation a north–south trajectory, with forward and side overlaps of 80% and 70%, respectively. The UAV was flown in excellent weather conditions, and each flight over the study sites lasted approximately 30 min.
After ortho mosaicking, the aerial photographs yielded ortho RGB images of the study sites. Then, visible light vegetation indices, such as the Visible Band Difference Vegetation Index (VDVI), and texture features were used to extract high-resolution vegetation regions. Visible light vegetation indices and texture features have been widely used to map land cover from RGB images, especially to extract vegetation regions [36,37]. Following previous studies, we applied a Random Forest model [21,36] to classify vegetation and non-vegetation, achieving an accuracy of 93.86% and a Kappa coefficient of 0.93.
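The VDVI mentioned above is conventionally defined as (2G − R − B)/(2G + R + B) on visible bands; a minimal sketch of computing it over an RGB array follows. The function name and toy pixel values are ours:

```python
import numpy as np

def vdvi(rgb):
    """Visible Band Difference Vegetation Index for an (H, W, 3) RGB array."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    denom = 2 * g + r + b
    # Guard against division by zero on black pixels
    return (2 * g - r - b) / np.maximum(denom, 1e-9)

# A green pixel scores high; a grey (non-vegetation) pixel scores near zero
patch = np.array([[[60, 180, 50], [120, 120, 120]]], dtype=np.uint8)
scores = vdvi(patch)
```

In practice such index maps are stacked with texture features as predictors for the Random Forest vegetation/non-vegetation classifier described in the text.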
Subsequently, we defined a fishnet with a cell size of 1 m over the study sites to calculate all UAV-derived parameters, including vegetation coverage, CHM, and 3DGV. Pix4D desktop was employed to produce point cloud data and generate the DSM and DTM. A total of 25 ground control points (GCPs) were set to correct the accuracy of the DSM. The median number of matched points per image in the two plots was 12,312.6 and 11,940.9, and the point cloud surface densities were 56.14 points/m2 and 61.87 points/m2, respectively. The CHM was calculated as the difference between the DSM and DTM, and vegetation coverage was computed as the percentage of vegetation pixels in each cell. Then, 3DGV was estimated based on the MNP algorithm [21], with the following formulas:
G = Σ_{i=1}^{n} (S_i × H_i × C_i) (1)
H_i = DSM_i − DTM_i (2)
C_i = m_i / M_i (3)
where G is 3DGV, i is the cell index, n is the total cell number, S_i is the pixel area of cell i, H_i is the canopy height of cell i, C_i is the vegetation coverage of cell i, DSM_i is the average DSM of cell i, and DTM_i is the average DTM of cell i. m_i is the number of vegetated pixels in cell i, and M_i is the total pixel number of cell i.

The LAI derived from UAV data was calculated based on the Beer–Lambert law, which describes the relationship between LAI and light transmittance through the canopy vegetation. Previous researchers have described this relationship by relating LAI to vegetation coverage [16,19,38]. The calculation formulas were as follows:
LAI = −cos θ × ln(P(θ)) / (G(θ) × Ω(θ)) (4)
P(θ) = 1 − C_i (5)
where θ is the observation zenith angle, considered to be 0 degrees for the orthophoto; P(θ) is the canopy gap fraction; G(θ) represents the average projected area of foliage per unit area in the plane perpendicular to the measurement direction, which is related to the distribution of leaf angles and is typically assigned a value of 0.5 [38]; and Ω(θ) represents the clumping index, which depends on the spatial distribution of leaves.

All UAV-derived parameters were estimated as reference data in 1 m cells. To combine them with satellite images, we defined a new fishnet with a cell size of 10 m on the orthophotos to register and correct against the satellite images at 10 m spatial resolution. The 10 m vegetation coverage and CHM were calculated as the mean vegetation coverage and CHM of all included 1 m cells, respectively, and the 10 m 3DGV was calculated as the sum of the 3DGV of all included 1 m cells.
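The per-cell computation described above can be sketched as follows. This is a toy example under our own assumed inputs (uniform elevations, a 60% vegetated 10 m cell of 1 m sub-cells, Ω = 1), not the study's actual data:

```python
import numpy as np

# Assumed toy inputs for one 10 m cell made of 1 m sub-cells:
dsm = np.full((10, 10), 8.0)          # surface elevation (m)
dtm = np.full((10, 10), 2.0)          # terrain elevation (m)
veg_mask = np.zeros((10, 10), bool)
veg_mask[:, :6] = True                # 60% of sub-cells are vegetated

cell_area = 1.0                       # S_i: 1 m x 1 m sub-cells
chm = dsm - dtm                       # H_i = DSM_i - DTM_i        (Eq. 2)
coverage = veg_mask.mean()            # C_i = m_i / M_i            (Eq. 3)

# MNP-style volume: sum of area x height x coverage over sub-cells (Eq. 1);
# here coverage per 1 m sub-cell is binary (vegetated or not)
g_3dgv = float(np.sum(cell_area * chm * veg_mask))

# Beer-Lambert LAI from gap fraction P = 1 - coverage (Eqs. 4-5),
# with theta = 0 (nadir view), G(theta) = 0.5, and clumping Omega = 1 assumed
gap_fraction = 1.0 - coverage
lai = -np.cos(0.0) * np.log(gap_fraction) / (0.5 * 1.0)
```

Aggregating to 10 m then means averaging coverage and CHM and summing 3DGV over the 1 m cells, as described above.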
2.2.4. Field Measurements
Field measurement was performed on 3 May 2022. We employed a real-time kinematic instrument, the ZHDV200 (RTK, GNSS, Guangzhou Hi-Target Navigation Tech Co., Ltd., Guangzhou, China), to accurately establish the boundaries and coordinates of the sample plots. These plots, measuring 10 m × 10 m, were strategically distributed across the study sites and totaled 60. Tree parameters, including tree height, crown diameter, and first branch height, were measured with a handheld digital multifunctional forest measurement gun and a tape measure [39]. In addition, according to the main tree species in the study sites, we used empirical formulas from previous studies to calculate 3DGV, which yield an average relative bias of 16.4% and an average relative RMSE of 12.5% [11]. The empirical formulas of 3DGV from previous studies are listed in Table 1.
The reference parameters derived from the UAV were assessed against the field measurements: the mean CHM of the plots achieved a bias of 1.34 m (18.34%) and an RMSE of 1.79 m (21.64%), and the mean 3DGV of the plots resulted in a bias of 107.34 m3 (16.29%) and an RMSE of 142.47 m3 (20.09%).
2.3. Calculating LAI Derived from Satellite Images
Previous studies have shown that the non-linear models have better performance than linear models in the estimation of LAI based on Sentinel-2 images [40]. In addition, among the common non-linear models, the exponential model established by NDVI can result in excellent accuracy in LAI estimation [41,42]. We selected this method to estimate satellite-derived LAI based on Sentinel-2 images. The formula of the function was as follows:
LAI = a × e^(b × NDVI) (6)
Figure 4 displays the correlation between UAV-derived LAI and NDVI derived from Sentinel-2. An exponential regression was fitted and validated on the validation set with an R2 of 0.66; the regression model is given in Equation (7).
(7)
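Fitting the exponential form of Equation (6) can be done by log-linearization followed by ordinary least squares. The sketch below uses synthetic NDVI–LAI pairs generated from assumed coefficients (a = 0.4, b = 2.5), which are not the study's fitted values:

```python
import numpy as np

# Synthetic pairs generated from an assumed model LAI = 0.4 * exp(2.5 * NDVI)
rng = np.random.default_rng(1)
ndvi = rng.uniform(0.3, 0.9, 200)
lai = 0.4 * np.exp(2.5 * ndvi) * rng.normal(1.0, 0.02, 200)

# Log-linearize Eq. (6): ln(LAI) = ln(a) + b * NDVI, then fit a line
b, ln_a = np.polyfit(ndvi, np.log(lai), 1)
a = np.exp(ln_a)

# Goodness of fit in the original (non-log) space
lai_hat = a * np.exp(b * ndvi)
r2 = 1 - np.sum((lai - lai_hat) ** 2) / np.sum((lai - lai.mean()) ** 2)
```

With real data, the fitted coefficients and the R2 of 0.66 reported above would come out of exactly this kind of regression on the training/validation split.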
2.4. Calculating CHM Derived from Satellite Images
For CHM extraction from satellite images, the Random Forest (RF) regression model has been widely used in previous studies [24,31,43]. RF is a machine learning method that constructs an ensemble of independent decision trees during training [44]. RF can effectively handle a substantial number of predictor variables without overfitting and is less susceptible to noise in the training data [45]. We used an RF regression model with 200 decision trees to estimate the satellite-derived CHM. A total of 11 spectral bands and 11 vegetation indices were obtained from Sentinel-2, along with two backscatter coefficients from Sentinel-1; their difference and quotient were also selected as candidate variables. All variables were evaluated for importance using RF feature importance assessment, and the correlation between variables was calculated by Pearson correlation analysis. The importance score represents the ratio of the average error to the standard deviation derived from the variable predictions across each decision tree in the RF algorithm, and Pearson correlation analysis was used to eliminate potential multicollinearity between variables. The feature selection is exhibited in Figure 5. A feature was retained in the model when its importance score was higher than 0.5 and its correlation coefficient exceeded 0.7. The 10 selected features are listed in Table 2.
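A minimal sketch of this kind of importance-plus-correlation screening with scikit-learn follows. Note the thresholds and the collinearity rule here are illustrative assumptions (scikit-learn's importance scores are normalized to sum to 1, so the study's 0.5 cutoff does not transfer directly); the synthetic predictors are ours:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 500
# Synthetic predictors: x0 drives the target, x1 nearly duplicates x0
# (collinear pair), x2 is pure noise
x0 = rng.uniform(0, 1, n)
X = np.column_stack([x0, x0 + rng.normal(0, 0.01, n), rng.uniform(0, 1, n)])
y = 3 * x0 + rng.normal(0, 0.1, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importance = rf.feature_importances_

# Walk features by decreasing importance, keep those above an (assumed)
# importance floor, and drop any feature whose Pearson |r| with an
# already-selected feature exceeds 0.7 (multicollinearity screen)
candidates = [i for i in np.argsort(importance)[::-1] if importance[i] > 0.05]
selected = []
for i in candidates:
    if all(abs(np.corrcoef(X[:, i], X[:, j])[0, 1]) <= 0.7 for j in selected):
        selected.append(i)
```

On this toy data only one of the collinear pair (x0, x1) survives the screen, which is the intended effect of the Pearson step.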
The comparison between the satellite-derived CHM and UAV-derived CHM in the validation set is shown in Figure 6, achieving an R2 of 0.77. The distribution maps of UAV-derived 3DGV, satellite-derived LAI, and satellite-derived CHM at 10 m resolution in the two study sites are displayed in Figure 7.
2.5. Construction of 3DGV Estimation Models
On the basis of satellite-derived LAI and CHM, univariate and bivariate parametric models were employed to explore the optimal retrieval model. Five strategies were used for the univariate estimation models based on LAI or CHM: linear, exponential, power, logarithmic, and polynomial models [46]. The combination of LAI and CHM was used to construct bivariate linear, exponential, power, logarithmic, and polynomial models [47]. Furthermore, we compared the two groups of univariate models to select the optimal LAI-based and CHM-based models. Following stand-level volume models, we combined these two optimal univariate models through multiplication [48] and constructed a compound model to regress 3DGV.
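Fitting such a compound model is a straightforward non-linear least-squares problem. The sketch below uses an assumed multiplicative form, a power term in LAI times a linear term in CHM, with made-up coefficients and synthetic data; the study's actual compound model and coefficients are reported in the Results:

```python
import numpy as np
from scipy.optimize import curve_fit

def compound(X, a, b, c):
    """Assumed compound form: power term in LAI times a linear term in CHM."""
    lai, chm = X
    return a * lai ** b * (chm + c)

# Synthetic training data from assumed true parameters (7.0, 1.5, 2.0)
rng = np.random.default_rng(3)
lai = rng.uniform(0.5, 4.0, 300)
chm = rng.uniform(2.0, 15.0, 300)
g = compound((lai, chm), 7.0, 1.5, 2.0) * rng.normal(1.0, 0.03, 300)

# Non-linear least squares on the multiplicative form
popt, _ = curve_fit(compound, (lai, chm), g, p0=(5.0, 1.0, 1.0), maxfev=10000)
g_hat = compound((lai, chm), *popt)
r2 = 1 - np.sum((g - g_hat) ** 2) / np.sum((g - g.mean()) ** 2)
```

The univariate candidates (linear, exponential, power, logarithmic, polynomial) can be fitted the same way by swapping the model function.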
2.6. Accuracy Assessment of Estimation Models
A total of 10,989 samples were randomly partitioned into 7692 training samples and 3297 validation samples at a ratio of 7:3. The constructed estimation models were evaluated using the validation data sets. Pearson correlation coefficient (R) was used to analyze the correlation relationship between 3DGV training set and predictor variables, and four accuracy assessment metrics—root mean square error (RMSE), coefficient of determination (R2), mean absolute error (MAE), and mean prediction error (MPE)—were employed to assess the accuracy of estimation models using the 3DGV validation set. Furthermore, the significance of models was tested at the significance level of 0.05. The formulas for metrics that we used are defined as follows:
R = Cov(y, ŷ) / √(Var(y) × Var(ŷ)) (8)
RMSE = √( Σ_{i=1}^{N} (y_i − ŷ_i)² / N ) (9)
R² = 1 − Σ_{i=1}^{N} (y_i − ŷ_i)² / Σ_{i=1}^{N} (y_i − ȳ)² (10)
MAE = (1/N) × Σ_{i=1}^{N} |y_i − ŷ_i| (11)
MPE = (t_α / ȳ) × √( Σ_{i=1}^{N} (y_i − ŷ_i)² / (N(N − p)) ) × 100% (12)
where ŷ_i is the retrieved value of the models, y_i is the reference value of the UAV data, ȳ is the average value of y_i, Cov is the covariance, Var is the variance, N is the total number of reference data, p is the number of parameters in modeling, and t_α is the t-value at the confidence level of 0.05.

For the model selected by comparing all 3DGV estimation models, we employed cross-validation to validate its accuracy across several ranks and generated the confusion matrix [49]. The accuracy of this model was evaluated using producer's accuracy (PA), user's accuracy (UA), and overall accuracy (OA). Furthermore, fractional vegetation coverage (FVC) was also introduced for comparison with 3DGV; it was inverted from NDVI based on the pixel dichotomy model. After validating the accuracy, we extended the model to cover the entire city of Kunming and compared it with CHM, LAI, and FVC to analyze its effectiveness. We used the maximum and minimum NDVI values in the study area in place of the NDVI_veg and NDVI_soil values [50]. That is, the NDVI value corresponding to a cumulative frequency of 2% was taken as NDVI_soil, and the NDVI value corresponding to a cumulative frequency of 98% was taken as NDVI_veg. The formula was as follows:
FVC = (NDVI − NDVI_soil) / (NDVI_veg − NDVI_soil) (13)
where NDVI_veg is the NDVI value of a pure vegetation pixel and NDVI_soil is the NDVI value of a pure non-vegetation pixel.

3. Results
3.1. Univariate Estimation Models of 3DGV
All samples (n = 10,989) were used to analyze the correlations between LAI and 3DGV and between CHM and 3DGV; the correlation between LAI and 3DGV (R = 0.71) was stronger than that between CHM and 3DGV (R = 0.67) (Figure 8). Five 3DGV estimation models (Figure 9) based on LAI and CHM were fitted using the training sets (n = 7692), and we compared their accuracy using the validation sets (n = 3297). The average RMSE of the LAI models was 183.72 m3/pixel and that of the CHM models was 235.91 m3/pixel, indicating that LAI performed better than CHM for 3DGV retrieval (Table 3). Among all univariate models, the power model based on LAI (R2 = 0.68, RMSE = 144.92 m3/pixel, MAE = 126.81 m3/pixel, MPE = 11.07%, p < 0.05) and the linear model based on CHM (R2 = 0.59, RMSE = 180.68 m3/pixel, MAE = 163.25 m3/pixel, MPE = 13.37%, p < 0.05) achieved the highest accuracy. The density scatter graphs (Figure 10) between the estimated 3DGV of the two optimal models and the reference 3DGV were plotted to analyze the variation across all samples. The scatters of the optimal models were concentrated around the reference line, and the variation was mainly distributed in low-density areas. Overestimation occurred in areas with low 3DGV values while underestimation occurred in areas with high 3DGV values, and the 3DGV power model based on LAI (Figure 10a) performed better than the 3DGV linear model based on CHM (Figure 10b).
3.2. Bivariate Estimation Models of 3DGV
The optimal bivariate estimation model of 3DGV was selected based on the accuracy assessments (Table 4). The compound model achieved the highest accuracy (R2 = 0.78, RMSE = 123.36 m3/pixel, MAE = 103.98 m3/pixel, MPE = 8.71%, p < 0.05). Note that the average RMSE of the bivariate models was 175.02 m3/pixel. Compared to the optimal univariate models, the compound model performed better than all univariate models (Table 3). Except for the bivariate logarithmic model, the accuracy of the bivariate models was consistently higher than that of all univariate models, which implies that the combination of LAI and CHM can effectively improve the accuracy of 3DGV estimation. In addition, the density scatter graphs between the estimated 3DGV of the compound model and the reference 3DGV also demonstrated its superiority (Figure 11). The scatters concentrated around the reference line from low to high 3DGV, with a slight overestimation in areas with low 3DGV values and underestimation in areas with high 3DGV values (Figure 11). Thus, we selected the compound model to map the 3DGV, observe the spatial distribution in detail, and further verify the mapping accuracy in Section 3.3.
3.3. Validation of Estimated 3DGV
The compound model regressed in Section 3.2 was introduced to estimate 3DGV for entire study sites. The total estimated 3DGV for the Wuhua plot and Jinning plot were 3,697,133.85 m3 and 2,654,830.43 m3, respectively. To evaluate the mapping accuracy of the estimated 3DGV, we divided the estimated 3DGV into four interval ranks for cross-validation; the performance is summarized in the confusion matrix and presented in Table 5. Furthermore, we calculated the sum of reference 3DGV at 10 m resolution, and Figure 12 provides the visual representations of distribution of estimated 3DGV, reference 3DGV, and FVC in the study sites. For comparison, we also mapped the difference between estimated and referenced 3DGV at 10 m resolution.
In Figure 12, the difference distribution map revealed only a few regions where the estimated values deviate either positively or negatively from the reference values. In addition, the FVC distribution maps showed that 3DGV overestimation tends to occur in areas with lower FVC values, while underestimation tends to occur in areas with higher FVC values. Table 5 presents the cross-validation of the 3DGV estimation model. The model exhibited its best performance in the rank of 0–250 m3/pixel (PA = 79.50%, UA = 77.83%), and the accuracy in the ranks of 250–500 m3/pixel (PA = 72.29%, UA = 73.82%) and 500–750 m3/pixel (PA = 71.35%, UA = 72.39%) was relatively similar. Note that the model performed worst in the rank of >750 m3/pixel (PA = 67.07%, UA = 67.19%), and we identified two potential reasons for this lower accuracy. One reason was the constraints of the estimation model, which tended to underestimate values compared to the reference data; this underestimation can be observed in the scatter plots (Figure 11). This bias may decrease the estimated value in higher 3DGV ranks and contribute to the lower accuracy. Another reason was the lack of 3DGV samples exceeding 750 m3 in our study sites: the model might not have sufficient training data to accurately estimate 3DGV in the higher ranks. In summary, although the model can cause minor estimation deviations, the compound model effectively captures the distribution pattern of 3DGV in UGS, and it still achieves superior accuracy in 3DGV estimation.
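The PA, UA, and OA values above follow directly from the rank-wise confusion matrix. A minimal sketch (the matrix entries below are illustrative, not the study's Table 5):

```python
import numpy as np

# Assumed confusion matrix: rows = reference rank, columns = predicted rank
# (ranks: 0-250, 250-500, 500-750, >750 m3/pixel)
cm = np.array([
    [80, 15,  3,  2],
    [12, 70, 14,  4],
    [ 2, 10, 65, 13],
    [ 1,  3, 12, 54],
])

producers_accuracy = np.diag(cm) / cm.sum(axis=1)  # PA: correct / reference total per rank
users_accuracy = np.diag(cm) / cm.sum(axis=0)      # UA: correct / predicted total per rank
overall_accuracy = np.diag(cm).sum() / cm.sum()    # OA: trace / grand total
```

The diagonal dominance of the matrix is what drives all three metrics; off-diagonal mass in the >750 row is what depresses the PA of the highest rank, as observed in Table 5.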
3.4. Spatial Pattern of 3DGV in Kunming City
To verify the applicability of the optimal model at a large scale, the model was used to map 3DGV across Kunming City. Due to the lack of UAV-derived reference data for the whole of Kunming City, we used the spatial distribution maps of CHM, LAI, and FVC to verify the consistency of the 3DGV spatial distribution in Figure 13a–d, with all values normalized in advance at 10 m spatial resolution. In addition, we mapped the retrieved 3DGV of Kunming in Figure 13e. The maximum 3DGV value is 1851.04 m3/pixel, the minimum is 57.52 m3/pixel, and the average is 553.77 m3/pixel. The spatial distribution pattern shows that 3DGV was higher in the central and northwest areas and lower in the northeast and southeast areas. Compared with the other characteristics, the spatial distribution pattern of 3DGV was highly consistent with those of CHM, LAI, and FVC.
4. Discussion
4.1. Analyzing the Effect of NDVI Saturation and Spatial Resolution of Sentinel Images
3DGV retrieval by satellite-derived LAI and CHM can achieve good accuracy, but some unavoidable effects arise from various sources. To analyze these effects, we calculated the sum of UAV-derived 3DGV at 10 m resolution and randomly selected a total of 1000 samples within the four ranks of summed UAV-derived 3DGV. Figure 14a illustrates the UAV-derived and satellite-derived 3DGV of these samples. In the rank of 0–250 m3/pixel, our model tends to overestimate, but it tends to underestimate as 3DGV values increase, especially in the rank of >750 m3/pixel. This trend was also evident in our results. We considered that this might be due to NDVI saturation, so we used UAV-derived LAI as the independent variable and NDVI as the dependent variable to explore this saturation issue based on a quadratic function. The independent variable corresponding to the extreme value of the function in the interval is the LAI saturation point [50]. As shown in Figure 14, LAI and NDVI fitted a quadratic relationship with an R2 of 0.57, and NDVI reached its saturation point of 0.66 when LAI was 3.63. Although some scholars have demonstrated the correlation between them [41,42], this saturation would still cause 3DGV to be underestimated at high 3DGV values, especially in areas with dense vegetation.
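The saturation-point analysis above amounts to fitting a quadratic and locating its vertex. A sketch with synthetic LAI–NDVI pairs generated from an assumed saturating curve (vertex at LAI = 3.5, NDVI = 0.65, not the study's fitted values):

```python
import numpy as np

# Synthetic pairs from an assumed quadratic that saturates at LAI = 3.5, NDVI = 0.65
rng = np.random.default_rng(4)
lai = rng.uniform(0.5, 3.5, 300)
ndvi = 0.65 - 0.02 * (lai - 3.5) ** 2 + rng.normal(0, 0.005, 300)

# Fit NDVI = a*LAI^2 + b*LAI + c and locate the vertex (extreme value)
a, b, c = np.polyfit(lai, ndvi, 2)
lai_sat = -b / (2 * a)                 # LAI at which NDVI stops increasing
ndvi_sat = np.polyval([a, b, c], lai_sat)
```

The recovered vertex (lai_sat, ndvi_sat) plays the role of the (3.63, 0.66) saturation point reported above.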
In addition, 3DGV estimation in our research was conducted at 10 m spatial resolution based on satellite images, while the reference 3DGV was computed at a higher spatial resolution of 1 m. We found that estimation based on satellite images caused underestimation. To analyze these effects, image details at different spatial resolutions are displayed in Figure 15 to describe the limitations of satellite-derived 3DGV compared to UAV-derived 3DGV. In the comparison, the UAV-derived 3DGV accurately displayed detailed vegetation information in the selected regions (Figure 15b), but the satellite-derived 3DGV exhibited limitations in capturing such detail (Figure 15a). The pixel coverage of satellite-derived 3DGV is insufficient in sparse areas, such as roadside trees or individual potted plants. Although inaccurate vegetation extraction might contribute to this effect, we believe this insufficiency arises because satellite images cannot capture smaller-scale vegetation elements. These limitations highlight the challenges in estimating regional 3DGV: beyond the model's estimation bias, there is the additional concern of vegetation information gaps.
4.2. Predictor Variables Selection
In terms of predictor variables, 3DGV is a parameter which reflects vegetation spatial structure, and the estimation of vegetation structure characteristics is typically based on various biophysical parameters, such as vegetation cover and biomass [31,51,52]. We used two variables that also reflect vegetation structure for estimation, which are relatively easier to obtain compared to the vegetation cover and biomass [53,54]. LAI is a widely accepted indicator to evaluate the ecological quality of UGS, as it is closely related to plant growth, biomass, and photosynthetic activity [55]. On the other hand, CHM provides valuable information about vegetation structure and directly reflects the vertical structure of ecosystems [56]. Another reason is that previous studies have estimated similar indicators to 3DGV, such as timber volume and forest volume, which are calculated by combining area-based indicators with tree height measurements, such as basal area [48,57,58]. Taking inspiration from these studies, we selected LAI to replace the basal area as the area-based indicator and combined it with CHM for the 3DGV estimation.
In future research, the 3DGV estimation model will be extended to larger scales. We will consider incorporating other remote sensing metrics to construct the estimation model, such as spectral bands [25], vegetation indices, and texture features [59], in order to further improve the accuracy of 3DGV estimation. Furthermore, we will collect more measured data and enrich the range of vegetation samples to enhance the applicability of the estimation model across different seasons, areas, and development levels of UGS in cities.
4.3. Limitations and Strengths
Many researchers have pursued various methods for estimating 3DGV, and our approach makes several advancements over previous studies. For the estimation method, we did not apply the MNP algorithm used for UAV-derived 3DGV to the satellite-derived estimation. The MNP algorithm treats the pixels of high-resolution images as voxels to estimate their volume [21], but the spatial resolution of satellite images is too low to calculate precise parameters within pixels. Our research used parametric models to retrieve 3DGV, achieving accuracy similar to previous studies with non-parametric models (relative error = 20.9% [60], estimation accuracy = 88.07% [17]). This level of consistency shows that parametric modeling is feasible for 3DGV estimation. Furthermore, our method approaches the accuracy of regional 3DGV estimation using LiDAR data (overall accuracy = 85% [61]), which supported 3DGV estimation results based on SPOT5. Moreover, our results illustrate that the physical parameters of LAI and CHM are superior to spectral and texture indices for volume estimation (rRMSE = 61.42% [59]), further demonstrating the feasibility and potential of 3DGV estimation based on Sentinel-1 and Sentinel-2.
However, this study still has several limitations. Firstly, the predictor variables, satellite-derived LAI and CHM, were both estimated values without field measurement. Although previous studies assessed the accuracy of the exponential LAI model constructed from NDVI [26] and of the CHM random forest model established from Sentinel-1 and Sentinel-2 images [25], some bias may remain in our reference data, for which R2 was 0.66 and 0.77, respectively. Secondly, the reference 3DGV was not the most accurate representation. The 3DGV used in this research was calculated from UAV RGB images, whose accuracy was assessed in our previous study [21]. The MNP algorithm overestimated 3DGV compared to the measured data (Bias = 15.18%, RMSE = 19.63%, R2 = 0.96). This overestimation occurred because only the higher parts of the tree were counted, so the lower parts of the crown were not accurately considered. Lastly, our estimation model was constructed from the overall vegetation distribution of UGS, which contains diverse vegetation types; as a result, the model may not be applicable to all kinds of vegetation in UGS.
5. Conclusions
In summary, this study focused on 3DGV retrieval in UGS and developed a parametric 3DGV estimation model by incorporating LAI and CHM derived from Sentinel-1 and Sentinel-2. Satellite-derived LAI and CHM showed strong power and linear relationships, respectively, to 3DGV derived from UAV images. The optimal univariate model of 3DGV was based on LAI, regressed as a power model with excellent accuracy (3DGV = 71.4·LAI^1.55 − 16.09, R2 = 0.68, RMSE = 144.92 m3/pixel, AE = 126.81 m3/pixel, MPE = 11.07%, p < 0.05). The optimal bivariate model was a compound model combining the power model of LAI and the linear model of CHM, which yielded the highest accuracy (3DGV = 37.13·LAI^−0.3·CHM + 38.62·LAI^1.8 + 13.8, R2 = 0.78, RMSE = 124.36 m3/pixel, AE = 103.98 m3/pixel, MPE = 8.71%, p < 0.05). The optimal estimation model achieved good overall accuracy at the study sites, and its spatial pattern was well consistent with CHM, LAI, and FVC within Kunming city. These results indicate that this 3DGV estimation model is suitable for application in UGS, though it remains limited in that it performed better for vegetation with lower 3DGV and underestimated vegetation with higher 3DGV. Our study developed a parametric 3DGV estimation model based on Sentinel-1 and Sentinel-2 images, which demonstrates the potential of extending 3DGV retrieval in UGS.
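The reported optimal compound model can be applied directly per pixel; a minimal sketch using the coefficients stated in this study (the LAI and CHM values in the example are illustrative):

```python
def estimate_3dgv(lai, chm):
    """Optimal compound model reported in this study, per 10 m pixel:
    3DGV = 37.13 * LAI^-0.3 * CHM + 38.62 * LAI^1.8 + 13.8  (m^3)."""
    return 37.13 * lai ** -0.3 * chm + 38.62 * lai ** 1.8 + 13.8

# e.g. a pixel with LAI = 2.0 and canopy height CHM = 8 m
print(round(estimate_3dgv(2.0, 8.0), 1))  # -> 389.6
```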
Z.H.: Methodology, Writing—Original Draft Preparation. W.X.: Conceptualization, Writing—Reviewing and Editing. Y.L.: Data curation, Formal Analysis. L.W.: Software, Visualization. G.O. and Q.D.: Software, Validation. N.L.: Investigation. All authors have read and agreed to the published version of the manuscript.
Data are contained within the article.
We thank the anonymous reviewers for their constructive comments on the earlier version of the manuscript.
The authors declare no conflict of interest.
Figure 2. Location of study area and study sites: (a) study area map, which is a true color composite created from Sentinel-2 images (B4 in Red, B3 in Green, B2 in Blue); (b) Wuhua plot, derived from UAV; (c) Jinning plot, derived from UAV.
Figure 3. Classification of vegetation and non-vegetation in two study sites: (a) in Wuhua plot; (b) in Jinning plot.
Figure 4. The correlation between UAV-derived LAI and NDVI derived from Sentinel-2.
Figure 5. The feature selection in RF model: (a) importance score ranking; (b) correlation analysis between variables.
Figure 7. The distribution of satellite-derived LAI and CHM, and UAV-derived 3DGV at 10 m resolution: (a) distribution in Wuhua plot; (b) distribution in Jinning plot.
Figure 8. The correlation between 3DGV and the predictor variables: (a) 3DGV and LAI; (b) 3DGV and CHM.
Figure 9. The fitting results of 3DGV estimation models using training set: (a) estimation models of LAI; (b) estimation models of CHM.
Figure 10. Comparisons of estimated 3DGV to reference 3DGV and density of all reference 3DGV data based on the optimal univariate estimation models: (a) LAI power model; (b) CHM linear model.
Figure 11. Comparisons of estimated 3DGV and reference 3DGV and density of all reference 3DGV data based on the optimal estimation model.
Figure 12. The distribution maps of estimated 3DGV, referenced 3DGV, 3DGV difference, and FVC in plots: (a) Wuhua plot; (b) Jinning plot.
Figure 13. Spatial distribution pattern of characteristics in Kunming city: (a) CHM normalization map; (b) LAI normalization map; (c) FVC distribution map; (d) 3DGV normalization map; (e) 3DGV distribution map.
Figure 14. The effect of NDVI saturation: (a) distribution of 3DGV in different 3DGV ranks; (b) quadratic function curve of UAV-derived LAI and NDVI.
Figure 15. The effect of different spatial resolutions: (a) 3DGV at 10 m resolution; (b) 3DGV at 1 m resolution; (c) UAV RGB images.
3DGV empirical formulas of various vegetation species.

| Tree Species | Geometrical Morphology | Calculation Formula | Description |
|---|---|---|---|
| Metasequoia glyptostroboides Hu and W. | cone | | |
| Salix babylonica L. | ovoid | | |
| Elaeis guineensis Jacq. | | | |
| Osmanthus fragrans Makino. | sphere | | |
| Cinnamomum japonicum Sieb. | | | |
| Ficus microcarpa L.f. | | | |
| Elaeocarpus decipiens Linn. | flabellate | | |
| Cycas revoluta Thunb. | | | |
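The crown morphologies in the table map onto standard crown-solid volume formulas; the sketch below uses generic geometric forms common in the 3DGV field-survey literature (C = crown diameter, H = crown height). These are illustrative assumptions, not the paper's exact per-species formulas:

```python
import math

def crown_volume(shape, c, h):
    """Generic crown-solid volumes (illustrative, not the paper's
    exact coefficients): cone = pi*C^2*H/12, ovoid treated as a
    spheroid = pi*C^2*H/6, sphere = pi*C^3/6 (H unused)."""
    if shape == "cone":        # e.g. Metasequoia glyptostroboides
        return math.pi * c ** 2 * h / 12.0
    if shape == "ovoid":       # e.g. Salix babylonica
        return math.pi * c ** 2 * h / 6.0
    if shape == "sphere":      # e.g. Osmanthus fragrans
        return math.pi * c ** 3 / 6.0
    raise ValueError("unknown crown shape: " + shape)

# crown diameter 4 m, crown height 6 m
print(round(crown_volume("cone", 4.0, 6.0), 2))  # -> 25.13
```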
The variables used in satellite-derived CHM modeling.

| Variable | Formula | Explanation | Attribute |
|---|---|---|---|
| VH | | Backscatter coefficient of the VH (Vertical–Horizontal) polarization mode | σ represents the backscatter coefficient after projection-angle correction, β represents the radar brightness value, and α represents the projection angle. |
| VV | | Backscatter coefficient of the VV (Vertical–Vertical) polarization mode | |
| VV/VH | | Ratio of the VV and VH backscatter coefficients | |
| LSWI | | Land surface water index | B8 is the NIR band (Wavelength = 842 nm), B4 is the red band (Wavelength = 665 nm), B3 is the green band (Wavelength = 560 nm). |
| EVI | | Enhanced vegetation index | |
| B2 | | Blue band (Wavelength = 490 nm) | |
| B6 | | Red-edge band (Wavelength = 740 nm) | |
| B8 | | NIR (Wavelength = 842 nm) | |
| B8A | | Narrow NIR (Wavelength = 865 nm) | |
| B11 | | SWIR (Wavelength = 1610 nm) | |
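The formula cells above did not survive extraction; the sketch below uses the conventional definitions of LSWI and EVI from the remote sensing literature, which this study's variables presumably follow (band reflectance values in the example are illustrative):

```python
def lswi(b8, b11):
    """Land surface water index, conventional definition:
    (NIR - SWIR) / (NIR + SWIR) = (B8 - B11) / (B8 + B11)."""
    return (b8 - b11) / (b8 + b11)

def evi(b2, b4, b8):
    """Enhanced vegetation index, conventional Sentinel-2 form:
    2.5 * (B8 - B4) / (B8 + 6*B4 - 7.5*B2 + 1)."""
    return 2.5 * (b8 - b4) / (b8 + 6 * b4 - 7.5 * b2 + 1)

# surface reflectances for a vegetated pixel (illustrative values)
print(round(lswi(0.35, 0.20), 3))        # -> 0.273
print(round(evi(0.04, 0.06, 0.35), 3))   # -> 0.514
```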
Accuracy assessment of 3DGV estimation using validation set based on univariate models.

| Model Group | Regression Model | Formula | R2 | RMSE (m3/Pixel) | AE (m3/Pixel) | MPE (%) | p-Value |
|---|---|---|---|---|---|---|---|
| 3DGV models based on LAI | Linear model | | 0.61 | 168.89 | 151.73 | 12.45 | <0.05 |
| | Exponential model | | 0.67 | 146.17 | 129.94 | 11.43 | <0.05 |
| | Power model | 3DGV = 71.4·LAI^1.55 − 16.09 | 0.68 | 144.92 | 126.81 | 11.07 | <0.05 |
| | Logarithmic model | | 0.36 | 313.26 | 276.14 | 24.08 | >0.05 |
| | Polynomial model | | 0.67 | 145.34 | 128.75 | 11.21 | <0.05 |
| 3DGV models based on CHM | Linear model | | 0.59 | 180.68 | 163.25 | 13.37 | <0.05 |
| | Exponential model | | 0.43 | 256.45 | 234.71 | 19.71 | >0.05 |
| | Power model | | 0.49 | 234.46 | 217.04 | 17.82 | >0.05 |
| | Logarithmic model | | 0.40 | 273.75 | 258.31 | 20.65 | >0.05 |
| | Polynomial model | | 0.51 | 234.19 | 206.58 | 17.16 | <0.05 |
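One way to fit the power form compared in the table is ordinary least squares in log-log space; this is a simplified sketch (the study's actual power model includes an additive constant, which requires nonlinear regression):

```python
import math

def fit_power(x, y):
    """Fit y = a * x^b by linear least squares on (ln x, ln y).
    Simple illustration of fitting one candidate model form; it
    cannot recover an additive offset term."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# noise-free check: y = 3 * x^2 is recovered exactly
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0 * v ** 2 for v in x]
a, b = fit_power(x, y)
print(round(a, 3), round(b, 3))  # -> 3.0 2.0
```

In practice each candidate form (linear, exponential, power, logarithmic, polynomial) is fitted on the training set and compared by the R2, RMSE, AE, and MPE columns shown above.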
Accuracy assessment of 3DGV estimation using validation set based on bivariate models.

| Model Group | Regression Model | Formula | R2 | RMSE (m3/Pixel) | AE (m3/Pixel) | MPE (%) | p-Value |
|---|---|---|---|---|---|---|---|
| Bivariate models | Linear model | | 0.68 | 142.78 | 127.59 | 10.96 | <0.05 |
| | Exponential model | | 0.76 | 126.17 | 109.94 | 8.94 | <0.05 |
| | Power model | | 0.77 | 124.92 | 106.81 | 8.83 | <0.05 |
| | Logarithmic model | | 0.53 | 227.94 | 199.83 | 16.45 | <0.05 |
| | Polynomial model | | 0.77 | 130.94 | 106.71 | 9.19 | <0.05 |
| | Compound model | 3DGV = 37.13·LAI^−0.3·CHM + 38.62·LAI^1.8 + 13.8 | 0.78 | 122.36 | 103.98 | 8.71 | <0.05 |
The confusion matrix of estimated 3DGV.

| 3DGV | 0–250 m3/Pixel | 250–500 m3/Pixel | 500–750 m3/Pixel | >750 m3/Pixel | Total |
|---|---|---|---|---|---|
| 0–250 m3/pixel | 3936 | 852 | 163 | 0 | 4951 |
| 250–500 m3/pixel | 972 | 2992 | 110 | 65 | 4139 |
| 500–750 m3/pixel | 132 | 124 | 944 | 123 | 1323 |
| >750 m3/pixel | 17 | 85 | 89 | 385 | 576 |
| Total | 5057 | 4053 | 1306 | 573 | 10,989 |
| PA/% | 79.50 | 72.29 | 71.35 | 67.07 | |
| UA/% | 77.83 | 73.82 | 72.39 | 67.19 | |
| OA/% | 75.15 | | | | |
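The per-class and overall accuracies can be recomputed directly from the matrix counts; a quick check (following the table's convention, PA per estimated-class row and UA per reference column; the recomputed OA differs from the tabulated 75.15% only at the rounding margin):

```python
# Confusion matrix from the table (rows: estimated class,
# columns: reference class, counts of 10 m pixels).
cm = [[3936,  852, 163,   0],
      [ 972, 2992, 110,  65],
      [ 132,  124, 944, 123],
      [  17,   85,  89, 385]]

n = sum(sum(row) for row in cm)                 # 10,989 pixels
diag = sum(cm[i][i] for i in range(4))          # correctly classed
oa = 100.0 * diag / n                           # overall accuracy
pa = [100.0 * cm[i][i] / sum(cm[i]) for i in range(4)]
ua = [100.0 * cm[i][i] / sum(r[i] for r in cm) for i in range(4)]

print(round(oa, 2))      # -> 75.14
print(round(pa[0], 2))   # -> 79.5
print(round(ua[0], 2))   # -> 77.83
```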
References
1. Nath, T.K.; Han, S.S.Z.; Lechner, A.M. Urban green space and well-being in Kuala Lumpur, Malaysia. Urban For. Urban Green.; 2018; 36, pp. 34-41. [DOI: https://dx.doi.org/10.1016/j.ufug.2018.09.013]
2. Dobbs, C.; Kendal, D.; Nitschke, C. The effects of land tenure and land use on the urban forest structure and composition of Melbourne. Urban For. Urban Green.; 2013; 12, pp. 417-425. [DOI: https://dx.doi.org/10.1016/j.ufug.2013.06.006]
3. Wolch, J.R.; Byrne, J.; Newell, J.P. Urban green space, public health, and environmental justice: The challenge of making cities ‘just green enough’. Landsc. Urban Plan.; 2014; 125, pp. 234-244. [DOI: https://dx.doi.org/10.1016/j.landurbplan.2014.01.017]
4. Wang, T.; Yang, X.; Hu, S.; Shi, H. Comparisons of methods measuring green quantity. China Acad. J. Electron. Publ. House; 2010; 8, pp. 36-38.
5. Chen, Z. Research on the ecological benefits of urban landscaping in Beijing (2). China Gard.; 1998; 14, pp. 51-54.
6. Zhou, J.H.; Sun, T.Z. Study on remote sensing model of three-dimensional green biomass and the estimation of environmental benefits of greenery. Remote Sens. Environ. China; 1995; 3, pp. 162-174.
7. Song, Z.; Guo, X.; Ma, W. Study on green quantity of green space along road in Beijing plain area. Jilin For. Sci. Technol.; 2008; 37, pp. 11-15.
8. Chen, F.; Zhou, Z.X.; Xiao, R.B.; Wang, P.C.; Li, H.F.; Guo, E.X. Estimation of ecosystem services of urban green-land in industrial areas: A case study on green-land in the workshop area of the Wuhan Iron and Steel Company. Acta Ecol. Sin.; 2006; 26, pp. 2230-2236.
9. Shen, X.Y.; Li, Z.D. Review of researches on the leaf area index of landscape plants. Jilin For. Sci. Technol.; 2007; 36, pp. 18-22.
10. Zhou, J.H. Research on the green quantity group of urban living environment (5)—Research on greening 3D volume and its application. China Gard.; 1998; 14, pp. 61-63.
11. Zhou, T.; Luo, H.; Guo, D. Remote sensing image based quantitative study on urban spatial 3D Green Quantity Virescence three dimension quantity. Acta Ecol. Sin.; 2005; 25, pp. 415-420.
12. Zhou, Y.; Zhou, J. Fast method to detect and calculate LVV. Acta Ecol. Sin. Pap.; 2006; 26, pp. 4204-4211.
13. Liu, C.; Li, L.; Zhao, G. Vertical Distribution of Tridimensional Green Biomass in Shenyang Urban Forests. J. Northeast. For. Univ.; 2008; 36, 18.
14. Zheng, G.; Moskal, L.M. Computational-Geometry-Based Retrieval of Effective Leaf Area Index Using Terrestrial Laser Scanning. IEEE Trans. Geosci. Remote Sens.; 2012; 50, pp. 3958-3969. [DOI: https://dx.doi.org/10.1109/TGRS.2012.2187907]
15. Ma, H.; Song, J.; Wang, J.; Xiao, Z.; Fu, Z. Improvement of spatially continuous forest LAI retrieval by integration of discrete airborne LiDAR and remote sensing multi-angle optical data. Agric. For. Meteorol.; 2014; 189–190, pp. 60-70. [DOI: https://dx.doi.org/10.1016/j.agrformet.2014.01.009]
16. Liu, Q.; Cai, E.; Zhang, J.; Song, Q.; Li, X.; Dou, B. A Modification of the Finite-length Averaging Method in Measuring Leaf Area Index in Field. Chin. Bull. Bot.; 2018; 53, pp. 671-685. [DOI: https://dx.doi.org/10.11983/CBB17083]
17. Li, F.; Li, M.; Feng, X.-g. High-Precision Method for Estimating the Three-Dimensional Green Quantity of an Urban Forest. J. Indian Soc. Remote Sens.; 2021; 49, pp. 1407-1417. [DOI: https://dx.doi.org/10.1007/s12524-021-01316-7]
18. Hyyppä, E.; Kukko, A.; Kaijaluoto, R.; White, J.C.; Wulder, M.A.; Pyörälä, J.; Liang, X.; Yu, X.; Wang, Y.; Kaartinen, H. Accurate derivation of stem curve and volume using backpack mobile laser scanning. ISPRS J. Photogramm. Remote Sens.; 2020; 161, pp. 246-262. [DOI: https://dx.doi.org/10.1016/j.isprsjprs.2020.01.018]
19. Sun, Y.; Gu, Z.; Li, D. Study on remote sensing retrieval of leaf area index based on unmanned aerial vehicle and satellite image. Sci. Surv. Mapp.; 2021; 46, pp. 106-112.
20. Zhou, X.; Liao, H.; Cui, Y.; Wang, F. UAV remote sensing estimation of three-dimensional green volume in landscaping: A case study in the Qishang campus of Fuzhou university. J. Fuzhou Univ.; 2020; 48, pp. 699-705.
21. Hong, Z.; Xu, W.; Liu, Y.; Wang, L.; Ou, G.; Lu, N.; Dai, Q. Estimation of the Three-Dimension Green Volume Based on UAV RGB Images: A Case Study in YueYaTan Park in Kunming, China. Forests; 2023; 14, 752. [DOI: https://dx.doi.org/10.3390/f14040752]
22. Zheng, S.; Meng, C.; Xue, J.; Wu, Y.; Liang, J.; Xin, L.; Zhang, L. UAV-based spatial pattern of three-dimensional green volume and its influencing factors in Lingang New City in Shanghai, China. Front. Earth Sci.; 2021; 15, pp. 543-552. [DOI: https://dx.doi.org/10.1007/s11707-021-0896-7]
23. Schumacher, J.; Rattay, M.; Kirchhöfer, M.; Adler, P.; Kändler, G. Combination of Multi-Temporal Sentinel 2 Images and Aerial Image Based Canopy Height Models for Timber Volume Modelling. Forests; 2019; 10, 746. [DOI: https://dx.doi.org/10.3390/f10090746]
24. Silveira, E.M.O.; Radeloff, V.C.; Martinuzzi, S.; Pastur, G.J.M.; Bono, J.; Politi, N.; Lizarraga, L.; Rivera, L.O.; Ciuffoli, L.; Rosas, Y.M. Nationwide native forest structure maps for Argentina based on forest inventory data, SAR Sentinel-1 and vegetation metrics from Sentinel-2 imagery. Remote Sens. Environ.; 2023; 285, 113391. [DOI: https://dx.doi.org/10.1016/j.rse.2022.113391]
25. Kacic, P.; Thonfeld, F.; Gessner, U.; Kuenzer, C. Forest Structure Characterization in Germany: Novel Products and Analysis Based on GEDI, Sentinel-1 and Sentinel-2 Data. Remote Sens.; 2023; 15, 1969. [DOI: https://dx.doi.org/10.3390/rs15081969]
26. Zhang, X.; Song, P. Estimating Urban Evapotranspiration at 10m Resolution Using Vegetation Information from Sentinel-2: A Case Study for the Beijing Sponge City. Remote Sens.; 2021; 13, 2048. [DOI: https://dx.doi.org/10.3390/rs13112048]
27. Mannschatz, T.; Pflug, B.; Borg, E.; Feger, K.H.; Dietrich, P. Uncertainties of LAI estimation from satellite imaging due to atmospheric correction. Remote Sens. Environ.; 2014; 153, pp. 24-39. [DOI: https://dx.doi.org/10.1016/j.rse.2014.07.020]
28. Xie, Q.; Dash, J.; Huete, A.; Jiang, A.; Yin, G.; Ding, Y.; Peng, D.; Hall, C.C.; Brown, L.; Shi, Y. et al. Retrieval of crop biophysical parameters from Sentinel-2 remote sensing imagery. Int. J. Appl. Earth Obs. Geoinf.; 2019; 80, pp. 187-195. [DOI: https://dx.doi.org/10.1016/j.jag.2019.04.019]
29. Abdi, A.M. Land cover and land use classification performance of machine learning algorithms in a boreal landscape using Sentinel-2 data. GIScience Remote Sens.; 2020; 57, pp. 1-20. [DOI: https://dx.doi.org/10.1080/15481603.2019.1650447]
30. Meng, B.; Liang, T.; Yi, S.; Yin, J.; Cui, X.; Ge, J.; Hou, M.; Lv, Y.; Sun, Y. Modeling Alpine Grassland Above Ground Biomass Based on Remote Sensing Data and Machine Learning Algorithm: A Case Study in East of the Tibetan Plateau, China. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2020; 13, pp. 2986-2995. [DOI: https://dx.doi.org/10.1109/JSTARS.2020.2999348]
31. Kacic, P.; Hirner, A.; Da Ponte, E. Fusing Sentinel-1 and -2 to Model GEDI-Derived Vegetation Structure Characteristics in GEE for the Paraguayan Chaco. Remote Sens.; 2021; 13, 5105. [DOI: https://dx.doi.org/10.3390/rs13245105]
32. Aryal, J.; Sitaula, C.; Aryal, S. NDVI Threshold-Based Urban Green Space Mapping from Sentinel-2A at the Local Governmental Area (LGA) Level of Victoria, Australia. Land; 2022; 11, 351. [DOI: https://dx.doi.org/10.3390/land11030351]
33. Hashim, H.; Abd Latif, Z.; Adnan, N.A. Urban vegetation classification with NDVI threshold value method with very high resolution (VHR) Pleiades imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.; 2019; 42, pp. 237-240. [DOI: https://dx.doi.org/10.5194/isprs-archives-XLII-4-W16-237-2019]
34. Karimulla, S.; Ravi Raja, A. Tree Crown Delineation from High Resolution Satellite Images. Indian J. Sci. Technol.; 2016; 9, S1. [DOI: https://dx.doi.org/10.17485/ijst/2016/v9iS1/107913]
35. Srinivas, C.; Prasad, M.; Sirisha, M. Remote sensing image segmentation using OTSU algorithm. Int. J. Comput. Appl.; 2019; 975, 8887.
36. Feng, Q.; Liu, J.; Gong, J. UAV Remote Sensing for Urban Vegetation Mapping Using Random Forest and Texture Analysis. Remote Sens.; 2015; 7, pp. 1074-1094. [DOI: https://dx.doi.org/10.3390/rs70101074]
37. Wang, X.; Wang, M.; Wang, S.; Wu, Y. Extraction of vegetation information from visible unmanned aerial vehicle images. Nongye Gongcheng Xuebao/Trans. Chin. Soc. Agric. Eng.; 2015; 31, pp. 152-159. [DOI: https://dx.doi.org/10.3969/j.issn.1002-6819.2015.05.022]
38. Chu, H.; Xiao, Q.; Bai, J. The Retrieval of Leaf Area Index based on Remote Sensing by Unmanned Aerial Vehicle. Remote Sens. Technol. Appl.; 2017; 32, pp. 141-147.
39. Xu, W.; Feng, Z.; Su, Z.; Xu, H.; Jiao, Y.; Fan, J. Development and experiment of handheld digitalized and multi-functional forest measurement gun. Trans. Chin. Soc. Agric. Eng.; 2013; 29, pp. 90-99.
40. Cañete-Salinas, P.; Zamudio, F.; Yáñez, M.; Gajardo, J.; Valdés, H.; Espinosa, C.; Venegas, J.; Retamal, L.; Ortega-Farias, S.; Acevedo-Opazo, C. Evaluation of models to determine LAI on poplar stands using spectral indices from Sentinel-2 satellite images. Ecol. Model.; 2020; 428, 109058. [DOI: https://dx.doi.org/10.1016/j.ecolmodel.2020.109058]
41. Verrelst, J.; Rivera, J.P.; Veroustraete, F.; Muñoz-Marí, J.; Clevers, J.G.P.W.; Camps-Valls, G.; Moreno, J. Experimental Sentinel-2 LAI estimation using parametric, non-parametric and physical retrieval methods—A comparison. ISPRS J. Photogramm. Remote Sens.; 2015; 108, pp. 260-272. [DOI: https://dx.doi.org/10.1016/j.isprsjprs.2015.04.013]
42. Wang, J.; Xiao, X.; Bajgain, R.; Starks, P.; Steiner, J.; Doughty, R.B.; Chang, Q. Estimating leaf area index and aboveground biomass of grazing pastures using Sentinel-1, Sentinel-2 and Landsat images. ISPRS J. Photogramm. Remote Sens.; 2019; 154, pp. 189-201. [DOI: https://dx.doi.org/10.1016/j.isprsjprs.2019.06.007]
43. Chen, Y.; Zhang, X.; Gao, X.; Gao, J. Estimating average tree height in Xixiaoshan Forest Farm, Northeast China based on Sentinel-1 with Sentinel-2A data. Chin. J. Appl. Ecol.; 2021; 32, pp. 2839-2846.
44. Breiman, L. Random Forests. Mach. Learn.; 2001; 45, pp. 5-32. [DOI: https://dx.doi.org/10.1023/A:1010933404324]
45. Shataee, S.; Kalbi, S.; Fallah, A.; Pelz, D. Forest attribute imputation using machine-learning methods and ASTER data: Comparison of k-NN, SVR and random forest regression algorithms. Int. J. Remote Sens.; 2012; 33, pp. 6254-6280. [DOI: https://dx.doi.org/10.1080/01431161.2012.682661]
46. Lyu, X.; Li, X.; Gong, J.; Li, S.; Dou, H.; Dang, D.; Xuan, X.; Wang, H. Remote-sensing inversion method for aboveground biomass of typical steppe in Inner Mongolia, China. Ecol. Indic.; 2021; 120, 106883. [DOI: https://dx.doi.org/10.1016/j.ecolind.2020.106883]
47. Zhang, R.P.; Zhou, J.H.; Guo, J.; Miao, Y.H.; Zhang, L.L. Inversion models of aboveground grassland biomass in Xinjiang based on multisource data. Front. Plant Sci.; 2023; 14, 1152432. [DOI: https://dx.doi.org/10.3389/fpls.2023.1152432]
48. Zeng, W.; Yang, X.; Chen, X. Comparison on Prediction Precision of One-variable and Two-variable Volume Modelson Tree-leveland Stand-level. Cent. South For. Inventory Plan.; 2017; 36, pp. 1-6.
49. Lin, S.; Zhang, H.; Liu, S.; Gao, G.; Li, L.; Huang, H. Characterizing Post-Fire Forest Structure Recovery in the Great Xing’an Mountain Using GEDI and Time Series Landsat Data. Remote Sens.; 2023; 15, 3107. [DOI: https://dx.doi.org/10.3390/rs15123107]
50. Liu, Y.; Xu, W.; Hong, Z.; Wang, L.; Ou, G.; Lu, N.; Dai, Q. Integrating three-dimensional greenness into RSEI improved the scientificity of ecological environment quality assessment for forest. Ecol. Indic.; 2023; 156, 111092. [DOI: https://dx.doi.org/10.1016/j.ecolind.2023.111092]
51. Potapov, P.; Tyukavina, A.; Turubanova, S.; Talero, Y.; Hernandez-Serna, A.; Hansen, M.C.; Saah, D.; Tenneson, K.; Poortinga, A.; Aekakkararungroj, A. et al. Annual continuous fields of woody vegetation structure in the Lower Mekong region from 2000–2017 Landsat time-series. Remote Sens. Environ.; 2019; 232, 111278. [DOI: https://dx.doi.org/10.1016/j.rse.2019.111278]
52. Brede, B.; Verrelst, J.; Gastellu-Etchegorry, J.P.; Clevers, J.; Goudzwaard, L.; den Ouden, J.; Verbesselt, J.; Herold, M. Assessment of Workflow Feature Selection on Forest LAI Prediction with Sentinel-2A MSI, Landsat 7 ETM+ and Landsat 8 OLI. Remote Sens.; 2020; 12, 915. [DOI: https://dx.doi.org/10.3390/rs12060915]
53. Liu, Z.; Jin, G. Improving accuracy of optical methods in estimating leaf area index through empirical regression models in multiple forest types. Trees; 2016; 30, pp. 2101-2115. [DOI: https://dx.doi.org/10.1007/s00468-016-1437-y]
54. Liu, X.; Su, Y.; Hu, T.; Yang, Q.; Liu, B.; Deng, Y.; Tang, H.; Tang, Z.; Fang, J.; Guo, Q. Neural network guided interpolation for mapping canopy height of China’s forests by integrating GEDI and ICESat-2 data. Remote Sens. Environ.; 2022; 269, 112844. [DOI: https://dx.doi.org/10.1016/j.rse.2021.112844]
55. Fang, H.; Baret, F.; Plummer, S.; Schaepman-Strub, G. An Overview of Global Leaf Area Index (LAI): Methods, Products, Validation, and Applications. Rev. Geophys.; 2019; 57, pp. 739-799. [DOI: https://dx.doi.org/10.1029/2018RG000608]
56. Xu, C.; Hantson, S.; Holmgren, M.; van Nes, E.H.; Staal, A.; Scheffer, M. Remotely sensed canopy height reveals three pantropical ecosystem states. Ecology; 2016; 97, pp. 2518-2521. [DOI: https://dx.doi.org/10.1002/ecy.1470]
57. Hill, A.; Breschan, J.; Mandallaz, D. Accuracy Assessment of Timber Volume Maps Using Forest Inventory Data and LiDAR Canopy Height Models. Forests; 2014; 5, pp. 2253-2275. [DOI: https://dx.doi.org/10.3390/f5092253]
58. Tonolli, S.; Dalponte, M.; Neteler, M.; Rodeghiero, M.; Vescovo, L.; Gianelle, D. Fusion of airborne LiDAR and satellite multispectral data for the estimation of timber volume in the Southern Alps. Remote Sens. Environ.; 2011; 115, pp. 2486-2498. [DOI: https://dx.doi.org/10.1016/j.rse.2011.05.009]
59. Fang, G.; He, X.; Weng, Y.; Fang, L. Texture Features Derived from Sentinel-2 Vegetation Indices for Estimating and Mapping Forest Growing Stock Volume. Remote Sens.; 2023; 15, 2821. [DOI: https://dx.doi.org/10.3390/rs15112821]
60. Li, X.; Tang, L.; Peng, W.; Chen, J. Estimation method of urban green space living vegetation volume based on backpack light detection and ranging. Chin. J. Appl. Ecol.; 2021; 33, pp. 2777-2784. [DOI: https://dx.doi.org/10.13287/j.1001-9332.202210.020]
61. He, C.; Convertino, M.; Feng, Z.; Zhang, S. Using LiDAR data to measure the 3D green biomass of Beijing urban forest in China. PLoS ONE; 2013; 8, e75920. [DOI: https://dx.doi.org/10.1371/journal.pone.0075920] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24146792]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Quantification of three-dimensional green volume (3DGV) plays a crucial role in assessing the environmental benefits of urban green space (UGS) at a regional level. However, precisely estimating regional 3DGV from satellite images remains challenging. In this study, we developed a parametric estimation model to retrieve 3DGV in UGS by combining Sentinel-1 and Sentinel-2 images. Firstly, UAV images were used to calculate the reference 3DGV based on the mean of neighboring pixels (MNP) algorithm. Secondly, we applied the canopy height model (CHM) and leaf area index (LAI) derived from Sentinel-1 and Sentinel-2 images to construct 3DGV estimation models. Then, we compared the accuracy of the estimation models to select the optimal one. Finally, estimated 3DGV maps were generated using the optimal model, and the reference 3DGV was employed to evaluate their accuracy. Results indicated that the optimal model combined the LAI power model and the CHM linear model (3DGV = 37.13·LAI^−0.3·CHM + 38.62·LAI^1.8 + 13.8, R2 = 0.78, MPE = 8.71%). We validated the optimal model at the study sites and achieved an overall accuracy (OA) of 75.15%; this model was then used to map the 3DGV distribution at 10 m resolution in Kunming city. These results demonstrated the potential of combining Sentinel-1 and Sentinel-2 images to construct an estimation model for 3DGV retrieval in UGS.
1 College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming 650233, China;
2 Institute of Big Data and Artificial Intelligence, Southwest Forestry University, Kunming 650233, China;
3 College of Forestry, Southwest Forestry University, Kunming 650233, China;
4 Art and Design College, Southwest Forestry University, Kunming 650024, China;