Abstract
Forest understory vegetation is an important component of wildlife habitat, among other ecological functions. Predicting and mapping understory is a critical need for forest management and conservation planning, but it has proved difficult. LiDAR has the potential to provide remotely sensed data on forest understory structure, but this potential has yet to be fully validated. Our objective was to examine the capacity of LiDAR point cloud data to predict forest understory cover. We modeled ground-based observations of understory structure in three vertical strata (0.5 m to < 1.5 m, 1.5 m to < 2.5 m, 2.5 m to < 3.5 m) as a function of a variety of LiDAR metrics using both mixed-effects and Random Forest models. We compared four understory LiDAR metrics designed to control for the spatial heterogeneity of sampling density. The four metrics were highly correlated, and all produced high explained variance in mixed-effects models. The top-ranked model used a voxel-based understory metric along with vertical stratum (Akaike weight = 1, explained variance = 87%, SMAPE = 15.6%). We found evidence of occlusion of LiDAR pulses in the lowest stratum, but no evidence that this occlusion influenced the predictability of understory structure. The Random Forest model results were consistent with those of the mixed-effects models, in that all four understory LiDAR metrics were identified as important, along with vertical stratum. The Random Forest model explained 74.4% of the variance, but had a lower cross-validation error of 12.9%. Based on these results, we conclude that the best approach for predicting understory structure is the mixed-effects model with the voxel-based understory LiDAR metric and vertical stratum, but that the other understory LiDAR metrics (fractional cover, normalized cover and leaf area density) would still be effective in mixed-effects and Random Forest modeling approaches.
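As a rough illustration of the modeling workflow summarized above, the sketch below derives a simple voxel-occupancy understory metric for the three vertical strata from a synthetic point cloud, fits a Random Forest, and reports a cross-validated SMAPE. This is not the authors' implementation: the plot size, voxel edge length, metric definition, synthetic data, and the particular SMAPE variant are all assumptions made for the example.

```python
# Hedged sketch of the general approach (hypothetical data and parameters).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

STRATA = [(0.5, 1.5), (1.5, 2.5), (2.5, 3.5)]  # vertical strata from the abstract
VOXEL = 1.0                                    # assumed 1 m voxel edge length
PLOT = 20.0                                    # assumed 20 m x 20 m plot footprint

def voxel_understory_metric(pts, z_lo, z_hi, voxel=VOXEL, plot=PLOT):
    """Fraction of voxels within a vertical stratum containing >= 1 LiDAR return."""
    in_stratum = pts[(pts[:, 2] >= z_lo) & (pts[:, 2] < z_hi)]
    if len(in_stratum) == 0:
        return 0.0
    # Index occupied voxels by integer (x, y, z) coordinates, z relative to stratum base
    shifted = in_stratum - np.array([0.0, 0.0, z_lo])
    occupied = {tuple(v) for v in (shifted // voxel).astype(int)}
    nx = ny = int(np.ceil(plot / voxel))
    nz = int(np.ceil((z_hi - z_lo) / voxel))
    return len(occupied) / (nx * ny * nz)

# Toy predictor table: one row per (plot, stratum) with the voxel metric and a
# stratum indicator; the response is a simulated "ground-observed" cover value.
rows, cover = [], []
for plot_id in range(60):
    n_pts = rng.integers(2000, 8000)
    pts = rng.uniform([0, 0, 0], [PLOT, PLOT, 25], size=(n_pts, 3))  # x, y, height (m)
    for k, (lo, hi) in enumerate(STRATA):
        m = voxel_understory_metric(pts, lo, hi)
        rows.append([m, k])
        cover.append(0.8 * m + 0.05 * k + rng.normal(0, 0.02))  # synthetic response

X, y = np.array(rows), np.array(cover)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
pred = cross_val_predict(rf, X, y, cv=5)

# One common SMAPE definition; the paper may use a different variant.
smape = 100 * np.mean(2 * np.abs(pred - y) / (np.abs(pred) + np.abs(y)))
print(f"cross-validated SMAPE: {smape:.1f}%")
```

In practice the predictor table would be built from field plots and a normalized (height-above-ground) point cloud, and the mixed-effects alternative would add a random effect for plot or site; the Random Forest shown here simply mirrors the second modeling approach described in the abstract.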