1. Introduction
Accurate identification of tropical forest species would support a more accurate measure of several important species-dependent environmental variables, such as above-ground biomass and carbon uptake [1]. Currently, large uncertainties exist in estimates of biomass and carbon uptake for tropical forests [2]. An improved species inventory leading to better biomass estimates could improve current carbon budget measurements, leading to more accurate carbon offset programs [3]. The use of remote sensing imagery for forest analysis has a long history, from the use of the legacy Landsat systems [4], to contemporary imaging systems such as Landsat 8 [5], Sentinel 2, and high-resolution commercial imaging systems [1,6]. In addition, many studies have attempted to use remotely sensed imagery for tree type identification in complex tropical forest assemblages [4,7].
Many pixel-based classification studies have compared several different classification processes (e.g., maximum likelihood, spectral angle mapper, support vector machine, random forest) to determine which method works best in different types of forests, but with minimal success [8,9,10]. Most pixel-based tropical forest studies achieved accuracies ranging from 42–74%, depending on the forest assemblage and environments studied [8,9,10]. Some features, such as tree canopies, are typically not homogeneous, which can reduce separability between features [9]. This can lead to poor feature definition and low classification accuracies [11].
While pixel-based approaches have long been the standard for remote sensing classification, Geospatial Object-Based Image Analysis (GEOBIA) has made significant advances in the last decade and has proven superior to pixel-based approaches [12,13]. Many studies have also evaluated various machine learning methods, such as Random Forest (RF), decision trees, Support Vector Machines (SVM), and Artificial Neural Network (ANN) schemas. Utilizing the entire tree canopy (including all of the nuanced variance in the tree crown), rather than individual pixels within the crown or individual leaves, has proven to be a more accurate method for classifying complex features such as tree canopies [14]. High spatial resolution imagery can provide multiple pixels per tree canopy, and a segmentation process performed on such an image can yield (with careful tuning of parameters) clusters of pixels that represent single-tree canopy tops. Such a cluster of pixels, when averaged, provides a higher-fidelity representation of the overall canopy reflectance. Object-based classification methods can also add whole-object information beyond spectral content, such as shape, size, pixel variability, and proximity to other objects [9,13], which can help identify objects even when a defined object is not internally homogeneous. With the introduction of high resolution imagery (nearing 1 m spatial resolution), the application of a segmentation process to identify image objects has been successfully extended to vegetation studies, specifically for the identification of canopy tree types within a forest [13].
Regarding object-based classification, Clark et al. [15] achieved an 87.4% classification accuracy in their study of a tropical forest regime in Costa Rica when using an object-based approach with a Random Forest classifier. Others have directly compared the pixel-based approach to the object-based approach, with the object-based approach achieving superior results in a complex forest setting [14,15,16]. Immitzer et al. [12] used a Random Forest classifier to compare pixel-based versus object-based classification in an Austrian mixed forest using WorldView-2 image data. Manually extracted tree crown data provided ground truth from known tree locations in the study area. An object-based Random Forest approach provided the highest classification accuracy at 82.4%, and the pixel-based classification was on average 10 percentage points lower in accuracy.
One GEOBIA option is a rule-set object-based approach, which allows user input on variable importance and on the ranges of variable response that best characterize features [17]. The success of this technique depends heavily on the input data types and on the segmentation procedure accurately defining the objects being classified. Myint et al. [18] used a rule-set object-based classification to quantify mangrove extent in Bangladesh using Landsat imagery and achieved an overall accuracy of 84.1%, but the authors noted that the segmentation settings and rules applied in that particular study might not be suitable for other study areas. They also found that including more bands in the classification process did not produce higher accuracies but instead led to lower accuracies due to signature confusion related to high correlation between certain Landsat bands. Additionally, Ke et al. [19] used a rule-based classification to map forests in New York State, using both QuickBird imagery and LiDAR data (tree height measurements) as inputs to both the segmentation and the rule-set. Classification of four specific evergreen species and a broad deciduous species group yielded an accuracy of nearly 90%. The addition of height information from the LiDAR data increased classification accuracy by approximately 10%, supporting the premise that more numerous and diverse data sets for a study area increase classification accuracy [19].
In addition, traditional statistical techniques such as Multinomial Logistic Regression and Linear Discriminant Analysis [15,16,20,21,22] have had some success in determining the best combination of data types for classification purposes. Cross et al. [20] used a Discriminant Analysis with a Wilks’ Lambda test to evaluate WorldView-3 bands and 14 distinct Spectral Vegetation Indices (SVIs) for their discriminatory power in differentiating tropical forest tree types in Costa Rica.
Any classification of tropical vegetation would need to account for a variety of variables that control intra-species and inter-species variation [11]. Seasonality is an important consideration in any study of the tropical rainforest. Hesketh et al. [23] showed that all data used for classification purposes should be constrained to a single season (wet or dry), because within-season variability is typically low while variability between seasons is high; the dry season is preferred because it consistently shows more differences between features through the season. Castro et al. [24] also reported this effect, where classifying with data from different seasons or sites can reduce classification accuracy substantially. In addition, lianas can significantly skew the tree crown spectral signature for a given species at the leaf level, and possibly at the crown level [11,23].
In this study, we build on our previous research [20,25], which respectively identified effective data types for species differentiation and validated WorldView-3 as a viable data source for differentiating tree species, by utilizing very high resolution image-derived data products in a rule-set object-based classification for accurate identification of tropical forest species. This contrasts with the more complex approaches described above: affordable multispectral imagery and a simple, straightforward workflow are used to differentiate tree species. The approach employed information from several WorldView-3 image bands and two WorldView-3 spectral vegetation indices derived in Cross et al. [20] to identify six tropical forest species in Costa Rica. All data and imagery collected were constrained to the dry season, minimizing seasonal variance [23]. Illumination and view angle corrections, together with an atmospheric compensation procedure, helped create an accurate forest canopy reflectance image data set. Guidance from the literature on segmentation settings [9,18,26,27] allowed an accurate classification of tree species within the study site.
2. Materials and Methods
We have developed a processing and validation methodology for this study and our previous work [20,25] that applies corrected WorldView-3 multispectral imagery to extract a species canopy map. Tree species identification is achieved through an object-based classification. This is summarized in a flowchart (Figure 1) and discussed in detail below.
2.1. Study Site
La Selva Biological Station in Costa Rica (hereafter, ‘La Selva’) was chosen as the study area for this research due to its accessible location, extensive trail and tower infrastructure, and excellent support staff. La Selva also contains the 3.5 ha Holdridge Arboretum (hereafter, ‘Arboretum’), a managed research area within La Selva established in 1968 (Figure 2), which is the study site for this research effort. The Arboretum contains approximately 929 cataloged plants and trees, with the most recent census occurring in August 2016–March 2017. Within the managed area, 727 trees represent 185 native species [28]. The Arboretum is the focus of many research efforts at La Selva, and provides a baseline for taxonomic studies of tropical tree species within Central America.
2.2. WorldView-3 Imagery and Ground Truth Data Preparation
A logical data choice for achieving the stated goals of this study would be either a hyperspectral system [29,30] and/or a LiDAR system [31,32], as these imaging systems have proven to be excellent choices for species discrimination/identification, especially in complex forest areas [11,33]. The purpose of using WorldView-3 (a multispectral imaging system) for this study is to determine whether a more cost-effective image data set can be effective in a tropical forest setting. The advantages of the WorldView-3 sensor are large-area collection, good revisit times over broad areas of interest, and overall cost advantages over specialized aircraft-based sensor systems. While the WorldView-3 sensor does not match the spatial resolution of LiDAR or the spectral resolution of a hyperspectral sensor, it has sufficient spatial and radiometric resolution to collect many intra-crown pixels within a tree crown, and a sufficient number of image bands for spectral characterization of a particular species [25].
2.2.1. WorldView-3 Imagery Acquisition and Preparation
DigitalGlobe provided the WorldView-3 imagery for this study; the acquisition from November 11, 2014 was a nearly cloud-free scene from the end of the dry season (Table 1), maximizing the potential for tree species identification [23,24]. We atmospherically corrected the image using DigitalGlobe’s proprietary Atmospheric Compensation process (AComp), a physics-based compensation schema that uses observed in-scene pixel spectra for its correction procedure and accounts for water vapor in the atmospheric column [34]. The derived pixel-based Aerosol Optical Depth (AOD) information was applied in a radiative transfer schema for the collected imagery bands, and the output was an atmospherically compensated image of surface reflectance values per pixel [35]. This was an essential step in acquiring a true surface reflectance measure from the WorldView-3 imagery.
Due to the non-Lambertian characteristics of tree canopies, and because the imagery was collected at a non-nadir view angle (Table 1), a Bidirectional Reflectance Factor (BRF) correction was required to convert the data to an on-nadir view [36,37,38]. Reflectance anisotropy can vary by up to 30% within a closed forest canopy [39], and high spatial resolution imagery with variable view angles (such as WorldView-3) can be especially affected [40]. Average BRF correction values (Table 2) for different wavelength regions, derived from Breunig et al. [41] for a typical mixed tropical tree canopy with supporting information from [42], provided the correction parameters needed for the imagery.
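To make this correction step concrete, the sketch below (Python, not part of the original processing chain) applies the per-band factors from Table 2 to an atmospherically compensated reflectance stack. It assumes the tabulated factor represents the ratio of off-nadir to nadir reflectance, so each band is divided by its factor to normalize toward a nadir-equivalent view; the band names and function are illustrative only.

```python
import numpy as np

# Per-band BRF correction factors for the 26.2 degree off-nadir view (Table 2).
BRF_FACTORS = {
    "coastal": 1.01, "blue": 1.08, "green": 1.15, "yellow": 1.17,
    "red": 1.20, "red_edge": 1.25, "near_ir1": 1.33, "near_ir2": 1.34,
}

def brf_normalize(bands):
    """Normalize off-nadir surface reflectance to a nadir-equivalent view.

    bands : dict mapping band name -> 2-D numpy array of surface reflectance.
    Assumes each factor is the ratio of off-nadir to nadir reflectance for a
    closed tropical canopy, so dividing removes the view-angle brightening.
    """
    return {name: refl / BRF_FACTORS[name] for name, refl in bands.items()}
```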
Because of the off-nadir image collection and the inherent displacement error due to overall tree canopy height, a georectification process was performed to improve the locational accuracy of trees within the study area. A Trimble GeoExplorer 2008 GPS with a Trimble Zephyr 2 external backpack antenna, utilizing differential corrections through the Satellite Based Augmentation System and employing multipath correction, provided high-accuracy ground control points (most points better than 1 m positional accuracy). An image-to-map nearest-neighbor rectification procedure within ENVI provided geometrically corrected imagery.
2.2.2. Ground Truth Data Collection and Processing
We established ground truth for the tree species studied by finding well-exposed examples of the selected species’ crowns in the field outside of the Arboretum study area. Crown collections of six selected tree species within the La Selva Biological Reserve during May 2017 provided the ground truth necessary for this study (Table 3). All tree samples used for ground truth were independent of the Arboretum study site (but still within the confines of La Selva). Tree crown samples were of sufficient size to be at the top of the forest canopy structure and visible in the WorldView-3 image.
The availability of each species within the Arboretum and the clear identification of the tree crown within the imagery determined the tree species selection for this study. The ground truth data collected outside of the Arboretum (Table 3) determined which canopy trees would be part of the classification analysis within the Arboretum.
Figure 3 illustrates two typical crown data extractions from the WorldView-3 imagery for ground truth purposes.
GPS positions of the selected tree observations, converted to an ESRI Shapefile, provided accurate geolocation for matching the field-identified tree to the proper canopy visible in the WorldView-3 imagery. Additional ESRI Shapefiles provided by the staff at La Selva (trail locations, trail signs, streams and rivers, etc.) assisted in locating selected tree species within the image. An extraction of the full-crown pixel clusters yielded mean image-derived spectra for each crown. Figure 4 illustrates the average reflectance per tree crown for each WorldView-3 band of the ground truth of the six tree species studied, displaying the variance in the reflectivity values between species. These average reflectivity data values and their specific characteristics are typical of trees within the dry season [25].
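A minimal sketch of the crown-averaging step is given below, assuming each ground-truth crown is available as a boolean pixel mask over the reflectance array; the helper names are hypothetical and do not reflect the ENVI-based workflow actually used.

```python
import numpy as np

def mean_crown_spectrum(reflectance, crown_mask):
    """Average all pixels inside one crown mask, band by band.

    reflectance : (bands, rows, cols) surface reflectance array
    crown_mask  : (rows, cols) boolean array, True inside the tree crown
    Returns a (bands,) vector of mean crown reflectance.
    """
    return reflectance[:, crown_mask].mean(axis=1)

def species_mean_spectra(reflectance, crown_masks):
    """Average the per-crown spectra for each species.

    crown_masks : dict mapping species name -> list of boolean crown masks.
    """
    return {
        species: np.mean([mean_crown_spectrum(reflectance, m) for m in masks], axis=0)
        for species, masks in crown_masks.items()
    }
```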
In addition to the spectral reflectivity for each species, an extracted Canopy Average of the Arboretum was also included in Figure 4. This represents the collective response of canopy vegetation over the study area, and is an important data input to a vegetation index used in this study. A sampling of most of the non-shadow area of the Arboretum yielded a collection of 9750 pixels. Avoiding shadow areas was a priority during the extraction of the Canopy Average to ensure that the value calculated included mostly tree canopies.
An additional data set used for ground truth was an Arboretum inventory data file, which includes precise locations of canopy tree species [28], and was used for determining the accuracy of the classification in this study. A detailed catalog for the Arboretum describes all trees according to species, size measured in Diameter at Breast Height (DBH), and precise geolocation referenced from a permanent 25 m × 25 m grid (azimuth and distance from each grid post). The staff at La Selva produced an ESRI Shapefile (UTM Zone16N grid, meters) of the Arboretum catalog for easy integration with the imagery data [28].
2.3. Analysis
A series of preliminary statistical evaluation procedures helped determine which image data would be most effective for classification. Next, an imagery segmentation process defined tree crowns within the Arboretum, and a rule-set object-based classification approach with the appropriate data input guided the identification of individual tree species. Finally, an error matrix provided a determination of the accuracy of the classification process.
2.3.1. Data Selection
Typically, the most important imagery bands for vegetation are in the mid- to long-wavelength visible range, showing the variation in chlorophyll activity, and in the near infrared, showing the variation in structure and water content [44]. WorldView-3 has additional image bands in the Blue, Yellow, and Red Edge, and two bands in the near infrared (Near-IR1 and Near-IR2), providing a greater potential for differentiating vegetation [45]. Chlorophyll absorption is pronounced in the Blue and Red bands [46,47], and the Yellow band holds promise in providing additional information for species identification [48]. The Red Edge band characterizes plant health and is sensitive to Leaf Area Index [47,49,50]. An additional near-infrared band allows more information about vegetation water content and overall structure.
In addition to the sensor band information, we analyzed two Spectral Vegetation Indices (SVIs) to determine their ability to classify vegetation. The use of SVIs is well documented for determining a variety of vegetation parameters, including chlorophyll production, gross primary productivity, leaf area index, vegetation type, and biomass estimates [51,52,53,54,55]. These particular SVIs were created to better differentiate trees within a dense tropical forest canopy [20].
The WorldView Red Edge Slope Weighted Index (WV-RESWI) provides a combined measure of Red Edge band intensity and of the reflectivity slope between the Red and Near-IR1 bands of WorldView-3 (Equation (1)). Variation in red absorption by chlorophyll and infrared reflectivity by plant structure are important reflective characteristics that help define specific tree species [46,51]:
WV-RESWI = ((Near-IR1 − Red)/0.173) × Red Edge.
The second specialized SVI used in this study is the WorldView Average Canopy Reference Index (WV-ACRI). It is a measure of the differentiation of a specific tree species from the overall reflective response from a complex forest canopy (Equation (2)). The overall infrared response is calculated using the average of the near-infrared bands in WorldView-3 (Near-IR1, Near-IR2), and the visible response is a combination of the Green, Yellow, and Red Edge band reflective measurements as they compare to the Canopy Average for each of those bands. In the equations below, ACnir1 refers to average canopy Near-IR1 band, ACgrn refers to average canopy Green band, ACre refers to the average canopy Red Edge Band, etc. Canopy Average values for each WorldView-3 band provided the data input necessary for the calculation of WV-ACRI for the Arboretum.
WV-ACRI = IRAve + VisRE,
where:
IRAve = ((Near-IR1 − ACnir1) + (Near-IR2 − ACnir2))/2,
VisRE = (Green − ACgrn) + (Yellow − ACyel) + (Red Edge − ACre).
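The two indices follow directly from Equations (1) and (2); the sketch below expresses them over per-band reflectance arrays, with the Canopy Average values supplied as a dictionary (the variable names are ours, for illustration).

```python
def wv_reswi(near_ir1, red, red_edge):
    """WorldView Red Edge Slope Weighted Index, Equation (1)."""
    return ((near_ir1 - red) / 0.173) * red_edge

def wv_acri(green, yellow, red_edge, near_ir1, near_ir2, canopy_avg):
    """WorldView Average Canopy Reference Index, Equation (2).

    canopy_avg : dict of Canopy Average reflectance per band, e.g.
    {"green": ..., "yellow": ..., "red_edge": ..., "near_ir1": ..., "near_ir2": ...}
    """
    ir_ave = ((near_ir1 - canopy_avg["near_ir1"]) +
              (near_ir2 - canopy_avg["near_ir2"])) / 2.0
    vis_re = ((green - canopy_avg["green"]) +
              (yellow - canopy_avg["yellow"]) +
              (red_edge - canopy_avg["red_edge"]))
    return ir_ave + vis_re
```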
A discriminant analysis (DA) was performed to determine which of the eight multispectral WorldView-3 imagery bands and the two SVIs above (independent variables) had the most discriminatory power for the tree types (specific classes) studied [56]. As part of the DA, a Wilks’ Lambda test evaluated the discriminatory power of the independent variables [12,29], providing objective clarity as to which WorldView-3 bands and vegetation indices are most important for characterizing and differentiating tropical tree species. This test gives an initial assessment of group membership from the independent samples and insight into the importance of each independent variable. When the Wilks’ Lambda value for a variable is small, its discriminatory ability is higher, corresponding to statistically greater separability between the classified groups [57]. This is a common use of the Wilks’ Lambda procedure in image analysis [12,29].
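For reference, the per-variable Wilks’ Lambda in such a test of equality of group means reduces to the ratio of within-group to total sum of squares; the sketch below computes it for one band or SVI across species-labelled crown samples (an illustrative implementation, not the statistical package used in the study).

```python
import numpy as np

def wilks_lambda(values, groups):
    """Univariate Wilks' Lambda for one band or SVI across species groups.

    Lambda = SS_within / SS_total; smaller values indicate greater separability.
    values : 1-D array of the band or SVI value per crown sample
    groups : 1-D array of species labels, same length as values
    """
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    ss_total = np.sum((values - values.mean()) ** 2)
    ss_within = sum(
        np.sum((values[groups == g] - values[groups == g].mean()) ** 2)
        for g in np.unique(groups)
    )
    return ss_within / ss_total
```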
Table 4 shows the results of the Wilks’ Lambda test for the six tree species studied on all eight bands of WorldView-3 imagery and the two SVIs. The Wilks’ Lambda scores demonstrated significant discriminatory power in the traditional remote sensing bands typically used for vegetation analysis [44]. All bands and SVIs were significant at the α = 0.05 level. Both of the specialized SVIs constructed specifically for the WorldView-3 sensor performed as well as or better than individual imagery bands in their discriminatory power.
A statistical correlation analysis (Table 5) provided additional information on the importance of each band/index studied. The optimum independent data grouping for a classification would be data with the highest discriminatory power and the smallest correlation between variables [12,57].
Figure 5 and Figure 6 show the WorldView-3 band reflectivity and SVI values for the six tree species. Figure 5 shows the visible portion of the spectrum, from the Coastal band to the Red band, and Figure 6 illustrates the range of values from the Red Edge band through the two SVIs analyzed. The values are plotted separately because the visible reflectivity values (Figure 5) are approximately one order of magnitude smaller than the remaining reflectivity and SVI values (Figure 6).
Both SVIs provide an additional, important set of information for species differentiation. The value of the SVIs is evident in their ability to separate tropical forest species that are extremely close in spectral response in the original WorldView-3 bands, particularly where responses are similar in the visible portion of the spectrum and overlap in the near-infrared bands.
2.3.2. Object-Based Classification
A full-crown measure of reflectivity for each species (Table 3) is the basis for the classification within the Arboretum. Each tree crown in the study area was organized into a specific object segment comprising similar digital pixel values (Figure 7), with defined edges separating each group from other distinct groupings [17].
The ENVI edge segmentation procedure uses region-based algorithms controlled by spectral similarity (scale setting) and spatial similarity (merge setting). This segmentation procedure is typically less sensitive to slight variations in texture, which can be a significant advantage when using high-resolution imagery [58]. In addition, edge segmentation is overall a more accurate process for characterizing tree crowns [59,60]. Settings for the scale level and merge level must be defined appropriately based on the image resolution and the complexity of objects in the landscape [13]. User-defined thresholds for the scale level and merge level define the segments (standard values for both settings range from 0 to 100), and the success of the segmentation depends on the complexity of the landscape features [58]. Incorrect scale and merge settings lead either to overly complex segments that subdivide known features or to the assimilation of multiple features into large polygons [13]. The scale and merge levels are set at the discretion of the investigator, as there is no perfectly objective approach or process [14]. Several tropical forest studies have used a scale level between 10 and 40 (creating more segments) and a merge level between 70 and 90 (merging several segments) to ensure a good characterization of features within an image and to optimize tree crown definition [9,18,26,27,61]. Through testing of several different combinations of scale and merge values, a scale level of 11 and a merge level of 86 ensured good object differentiation in the Arboretum study area (Figure 7). The two SVIs described in this study were the data inputs for the segmentation process, as each SVI maximizes differentiation between species in the study area (Figure 6).
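The ENVI edge segmentation itself is proprietary; as a rough open-source analogue, the sketch below runs a graph-based region segmentation (Felzenszwalb, from scikit-image) on a single SVI image. Its scale and min_size parameters are only loosely analogous to ENVI’s scale and merge levels, and the values shown are placeholders, not the settings used in this study.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def segment_canopy(svi_image, scale=50.0, min_size=10):
    """Region-based segmentation of a single SVI image into crown-scale objects.

    Rescales the SVI to 0-1 and runs Felzenszwalb's graph-based segmentation.
    The scale/min_size knobs must be tuned by inspection, just as the ENVI
    scale and merge levels were tuned in the study (but they are not equivalent).
    Returns an integer label image, one label per segment.
    """
    lo, hi = np.nanmin(svi_image), np.nanmax(svi_image)
    norm = (svi_image - lo) / (hi - lo)
    return felzenszwalb(norm, scale=scale, sigma=0.5, min_size=min_size)
```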
A rule-set object-based classification identified the tree species based only on their distinctive mean crown response in the imagery bands and SVIs derived from the segmentation [61]. This is in contrast to traditional pixel-based classifications, which operate on individual pixel values and assign a class to each pixel in the imagery. The rule-set process allows the user to define specific data inputs for each class, and each data set can be constrained to a specific range of values that represents a certain class [61]. In order to determine the ability of the imagery alone to classify tree species, no other values (surface texture, segment metrics, etc.) were included in the classification procedure. We assigned a weight to each band or SVI based on its Wilks’ Lambda score within the rule-set process, ensuring that each band and SVI was applied at a level commensurate with its ability to differentiate tree species.
For the rule-set classification process, a value range of ±5% around the mean value of each WorldView-3 band and SVI defined a particular tree species. This range encompasses the variability that can exist within a particular species, allowing for intra-species variation while still maintaining a species-based consistency in mean spectral response within the defined object [25].
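A sketch of this rule-set decision for one segment is shown below. The ±5% window follows the text; the weighted tie-break among species whose rules are all satisfied is our illustrative assumption, since the exact scoring arithmetic is not specified in the rule-set tool.

```python
import numpy as np

def classify_segment(segment_means, species_rules, weights, tol=0.05):
    """Rule-set classification of one segment's mean band/SVI values.

    segment_means : {"red_edge": x, "near_ir1": x, ...} mean values for the segment
    species_rules : {"P. macroloba": {"red_edge": ref, ...}, ...} reference means;
                    each species lists only the bands/SVIs in its rule-set
    weights       : {"red_edge": w, ...} importance weights (e.g. from Wilks' Lambda)
    A species is a candidate only if every band/SVI in its rule-set lies within
    +/- tol (5%) of the reference mean.  Among candidates, the species with the
    smallest weighted relative deviation is returned; otherwise "Unknown".
    """
    best, best_score = "Unknown", np.inf
    for species, refs in species_rules.items():
        devs = {b: abs(segment_means[b] - ref) / abs(ref) for b, ref in refs.items()}
        if all(d <= tol for d in devs.values()):   # every rule satisfied
            score = sum(weights.get(b, 1.0) * d for b, d in devs.items())
            if score < best_score:
                best, best_score = species, score
    return best
```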
An accuracy analysis utilizing an error matrix compared the classification results to known point locations of the same canopy tree species within the Arboretum, determining whether the classifier identified the existence and location of the tree types studied. The locations of individual tree species within the study area were acquired from the Arboretum inventory data file [28] described in Section 2.2.2. The error matrix was applied to Arboretum trees with a DBH > 50 cm to ensure that each tree was large enough to be part of the visible Arboretum canopy [62]. Trees in this category were checked in situ to verify that they were part of the top canopy of the Arboretum.
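The error-matrix summary statistics used in the next section (overall accuracy, kappa, and commission/omission errors) can be computed as in the following sketch, assuming reference and predicted species labels for the DBH > 50 cm trees; scikit-learn is used here purely for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def accuracy_report(reference, predicted, labels):
    """Error-matrix summary for a set of reference vs. classified tree labels.

    reference, predicted : sequences of species labels for the assessed trees
    labels               : ordered list of class names for the matrix axes
    """
    cm = confusion_matrix(reference, predicted, labels=labels)  # rows = reference
    overall = np.trace(cm) / cm.sum()
    kappa = cohen_kappa_score(reference, predicted, labels=labels)
    # Commission error = 1 - user's accuracy (per classified/column total);
    # omission error   = 1 - producer's accuracy (per reference/row total).
    commission = 1 - np.diag(cm) / np.maximum(cm.sum(axis=0), 1)
    omission = 1 - np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
    return cm, overall, kappa, dict(zip(labels, commission)), dict(zip(labels, omission))
```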
3. Results and Discussion
The first classification (Classification 1) considered all bands and SVIs available for the process, excluding the Coastal and Blue bands, which are unreliable in a humid tropical environment because they are susceptible to severe atmospheric attenuation [46]. The data input for Classification 1 included only the top three image bands (Red Edge, Near-IR1, and Near-IR2) and the top SVI (WV-RESWI), based on the Wilks’ Lambda scores and correlation values. Table 6 shows the error matrix for Classification 1. An “Unknown” classification result was specified to represent trees or ground cover not included in the error matrix analysis.
Classification 2 included additional bands and SVIs based on information from Table 4 and Table 5 and the visual separation of values shown in Figure 5 and Figure 6. For each tree species, rule weights varied for each data input in the classification rule-set; these were based on the combination of bands or SVIs that best defined a particular species, improving the classification accuracy. The error matrix results of Classification 2 are shown in Table 7.
As is evident from the error matrix results, the WorldView-3 imagery captured the spectral variations between the tree species studied. The low errors of commission (7.89%) and omission (14.63%) in the error matrix for Classification 2 (Table 7) suggest that the segmentation parameters and rule-sets chosen maximized the unique species reflectivity response in the imagery. Figure 8 illustrates the distribution of species in and around the Arboretum as defined by Classification 2.
Misclassifications could be the result of shadowing within the Arboretum image due to the off-nadir WorldView-3 acquisition at a 26.2° zenith view angle (Table 1), as foreground tree crowns likely obscured or shadowed other canopy crowns. There is some evidence of this in the low Canopy Average values derived from the Arboretum (Figure 4). Other variables, such as lianas suspended within the canopies studied, could also have affected classification accuracy [23,24], potentially adding to misclassifications and increasing the unknown fraction in the error matrix.
Both SVIs significantly improved the segmentation process and overall classification accuracy, outperforming all of the individual bands available (Figure 6, Table 4, Table 5). The WV-ACRI showed promise, but its performance in this study was likely affected by suppressed Canopy Average values (Figure 4). The WV-RESWI performed well in capturing variability between chlorophyll production and plant structure because of its focus on Red Edge variability between species [46].
Over-classifying individual crowns to a specific species was likely, as the rule-set process attempts to fit a tree type to a defined segment that represents all or part of a known tree crown within the study. The high classification accuracy from Classification 2 (85.37%) likely reflects this potential error: the tree assemblage within the study area is extremely complex, and tree crowns can easily overlap, creating confusion in the classification procedure. Figure 8, which shows the geographical distribution of Classification 2, illustrates this effect; C. alliodora and C. odorata appear as widespread as P. macroloba in the Arboretum based on the classification, but in truth P. macroloba is the dominant species in this area [28].
4. Conclusions
We sought to establish a process of differentiating selected crown-canopy tree species within the tropical forest regime through the implementation of an object-based rule set classification schema. The complexity of a tropical forest assemblage, both in species diversity and inter-species variability, poses a great challenge to identifying individual tree species [2]. Successfully achieving a deeper understanding of the species assemblage within the tropical forest could improve our overall understanding of the role of forests in climate [1]. The results of this study show that a simple object-based rule-set classification, using readily available multispectral data, can yield accurate results in a complex tropical rainforest.
Characterizing the intra-crown reflectivity response (using the collective pixels in a segment that defines the tree crown) is the key to identifying species in a complex forest [63], because all of the information (and variability) within the tree crown is needed to identify tree species within the forest assemblage. This provides a significant advantage over pixel-based approaches, as the natural variability in the object (the tree canopy) is included in the characterization of the object [12]. This study identified the components needed to achieve that goal: high spatial and radiometric resolution imagery (e.g., WorldView-3 or similar) with appropriate bands and/or data products [20], careful corrections for illumination and attenuation-absorption effects [25], and a segmentation process that properly assigns single-tree canopies to one (or a few) segmented pixel cluster(s). Continuing to improve the inputs to the object-based rule-set classification (more precise segmentation, better data inputs through field measurements of more tree species) will be an important step in future analyses. This will ultimately contribute to accurately defining more tree species by their unique spectral signatures in other diverse tropical rainforest locations. We envision that this simple, straightforward process can be expanded to regional-scale tropical environments.
Bands | Spectral Range (nm)
---|---
Panchromatic | 450–800
Coastal | 400–450
Blue | 450–510
Green | 510–580
Yellow | 585–625
Red | 630–690
Red Edge | 705–745
Near-IR1 | 770–895
Near-IR2 | 860–1040

Sensor Resolution and Dynamic Range |
---|---
Panchromatic | 0.31 m
Multispectral | 1.24 m
Dynamic Range | 11 bits/pixel

Specific Image Information for Study |
---|---
Date/time | 11/11/2014, 15:52:28Z
Zenith, Az. View Angle | 26.2°, 108.8°
Cloud Cover | 0.5%
Data Extent (NW, SE Corners) | 10.48° N, 84.14° W; 10.25° N, 83.99° W
Bands | Spectral Range (nm) | BRF Factor 26.2° OZA |
---|---|---|
Coastal | 400–450 | 1.01 |
Blue | 450–510 | 1.08 |
Green | 510–580 | 1.15 |
Yellow | 585–625 | 1.17 |
Red | 630–690 | 1.20 |
Red Edge | 705–745 | 1.25 |
Near IR1 | 770–895 | 1.33 |
Near IR2 | 860–1040 | 1.34 |
Tree Species | Family | Approximate Location | Total Crowns | Total Pixels | Collection Size (m2) |
---|---|---|---|---|---|
Castilla elastica | Moraceae | Lab area and west La Selva | 3 | 81 | 100.44 |
Cedrela odorata | Meliaceae | Lab area and east La Selva | 2 | 81 | 100.44 |
Cordia alliodora | Boraginaceae | Arboretum trail from lab | 2 | 39 | 48.36 |
Pentaclethra macroloba | Fabaceae | Station entrance and west La Selva | 9 | 217 | 269.08 |
Pterocarpus sp. A | Fabaceae | Lab area and east La Selva | 2 | 128 | 158.72 |
Stryphnodendron microstachyum | Fabaceae | Station entrance and lab area | 3 | 87 | 107.88 |
Band Locations | Wilks’ Lambda | F-test |
---|---|---|
Coastal | 0.862 | 20.090 |
Blue | 0.662 | 64.082 |
Green | 0.717 | 49.514 |
Yellow | 0.710 | 51.138 |
Red | 0.626 | 74.983 |
Red Edge | 0.560 | 98.605 |
Near-IR1 | 0.500 | 125.583 |
Near-IR2 | 0.529 | 111.681 |
WV-ACRI | 0.529 | 111.568 |
WV-RESWI | 0.485 | 133.042 |
WV-3 Bands and Indices | Coastal | Blue | Green | Yellow | Red | Red Edge | Near-IR1 | Near-IR2 | WV-ACRI | WV-RESWI |
---|---|---|---|---|---|---|---|---|---|---|
Coastal | 1.000 | .321 | .212 | .257 | .326 | .081 | −.017 | .002 | .058 | .014 |
Blue | 1.000 | .522 | .529 | .621 | .236 | .041 | .081 | .202 | .109 | |
Green | 1.000 | .683 | .586 | .587 | .369 | .381 | .580 | .454 | ||
Yellow | 1.000 | .647 | .446 | .137 | .183 | .387 | .257 | |||
Red | 1.000 | .153 | −.054 | −.017 | .123 | −.004 | ||||
Red Edge | 1.000 | .757 | .817 | .943 | .903 | |||||
Near-IR1 | 1.000 | .766 | .891 | .942 | ||||||
Near-IR2 | 1.000 | .914 | .837 | |||||||
WV-ACRI | 1.000 | .958 | ||||||||
WV-RESWI | 1.000 |
Bands/Indices Used: Red Edge, Near-IR1, Near-IR2, WV-RESWI | ||||||||
---|---|---|---|---|---|---|---|---|
Overall Accuracy = 75.61%, Kappa = 0.685 | ||||||||
Classification | Field Reference | Users Acc. | ||||||
C. elastica | C. odorata | C. alliodora | P. macroloba | P. sp. A | S. micro. | Total | ||
C. elastica | 1 | 1 | 100% | |||||
C. odorata | 1 | 2 | 1 | 4 | 50.00% | |||
C. alliodora | 1 | 10 | 11 | 90.91% | ||||
P. macroloba | 13 | 13 | 100% | |||||
P. sp. A | 1 | 4 | 5 | 80.00% | ||||
S. micro. | 1 | 2 | 1 | 4 | 25.00% | |||
Unknown | 1 | 1 | 1 | 3 | ||||
Total | 3 | 3 | 12 | 15 | 7 | 1 | 41 | |
Prod. Acc. | 33.33% | 66.67% | 83.33% | 86.67% | 57.14% | 100% | 75.61% |
Bands/Index Used: Dependent on Rule-Set for Each Species | ||||||||
---|---|---|---|---|---|---|---|---|
Overall Accuracy = 85.37%, Kappa = 0.808 | ||||||||
Classification | Field Reference | Users Acc. | ||||||
C. elastica | C. odorata | C. alliodora | P. macroloba | P. sp. A | S. micro. | Total | ||
C. elastica | 2 | 2 | 100% | |||||
C. odorata | 3 | 1 | 4 | 75.00% | ||||
C. alliodora | 1 | 11 | 12 | 91.67% | ||||
P. macroloba | 13 | 13 | 100% | |||||
P. sp. A | 1 | 5 | 6 | 83.33% | ||||
S. micro. | 1 | 1 | 100% | |||||
Unknown | 1 | 1 | 1 | 3 | ||||
Total | 3 | 3 | 12 | 15 | 7 | 1 | 41 | |
Prod. Acc. | 66.67% | 100% | 91.67% | 86.67% | 71.43% | 100% | 85.37% |
Author Contributions
The following contributions were given to this research effort: Conceptualization, M.C., T.S., R.M.-S. and W.M.; Formal analysis, M.C. and F.P.; Methodology, M.C., T.S. and R.M.-S.; Project administration, M.C. and T.S.; Resources, F.P. and O.V.-R.; Software, M.C.; Validation, M.C., F.P. and O.V.-R.; Visualization, M.C.; Writing—original draft, M.C., T.S., F.P., O.V.-R., R.M.-S. and W.M.; Writing—review & editing, M.C., T.S., F.P., R.M.-S. and W.M.
Funding
This research was supported in part by US Geological Survey contract 140G0118C0005.
Acknowledgments
We would like to thank the staff at La Selva Biological Station in Costa Rica for their support of this research. We would also like to thank DigitalGlobe for the use of imagery for the La Selva Biological station area of study.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
1. Wulder, M.A.; Hall, R.J.; Coops, N.C.; Franklin, S.E. High Spatial Resolution Remotely Sensed Data for Ecosystem Characterization. Bioscience 2004, 54, 511–521.
2. Barbosa, J.M.; Broadbent, E.N.; Bitencourt, M.D. Remote Sensing of Aboveground Biomass in Tropical Secondary Forests: A Review. Int. J. Res. 2014, 2014, 715796.
3. Cifuentes-Jara, M.; Henry, M. Proceedings of the Regional Technical Workshop on Tree Volume and Biomass Allometric Equations in South and Central America; UN-REDD Programme: Geneva, Switzerland, 2014; Volume 92.
4. Li, G.; Lu, D.; Moran, E.; Hetrick, S. Land-cover classification in a moist tropical region of Brazil with Landsat Thematic Mapper imagery. Int. J. Remote Sens. 2011, 32, 8207–8230.
5. Roy, D.P.; Zhang, H.K.; Ju, J.; Gomez-Dans, J.L.; Lewis, P.E.; Schaaf, C.B.; Sun, Q.; Li, J.; Huang, H.; Kovalskyy, V. A general method to normalize Landsat reflectance data to nadir BRDF adjusted reflectance. Remote Sens. Environ. 2016, 176, 255–271.
6. Katoh, M. Classifying tree species in a northern mixed forest using high-resolution IKONOS data. J. Res. 2004, 9, 7–14.
7. Carleer, A.; Wolff, E. Exploitation of very high resolution satellite data for tree species identification. Photogramm. Eng. Remote Sens. 2004, 70, 135–140.
8. Shafri, H.Z.M.; Suhaili, A.; Mansor, S. The Performance of Maximum Likelihood, Spectral Angle Mapper, Neural Network and Decision Tree Classifiers in Hyperspectral Image Analysis. J. Comput. Sci. 2007, 3, 419–423.
9. Yu, Q.; Gong, P.; Clinton, N.; Biging, G.; Kelly, M.; Schirokauer, D. Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm. Eng. Remote Sens. 2006, 72, 799–811.
10. Plourde, L.; Congalton, R.G. Sampling Method and Sample Placement. Photogramm. Eng. Remote Sens. 2003, 69, 289–297.
11. Zhang, J.; Rivard, B.; Sánchez-Azofeifa, A.; Castro-Esau, K. Intra- and inter-class spectral variability of tropical tree species at La Selva, Costa Rica: Implications for species identification using HYDICE imagery. Remote Sens. Environ. 2006, 105, 129–141.
12. Immitzer, M.; Atzberger, C.; Koukal, T. Tree species classification with Random forest using very high spatial resolution 8-band worldView-2 satellite data. Remote Sens. 2012, 4, 2661–2693.
13. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16.
14. Myint, S.W.; Gober, P.; Brazel, A.; Grossman-Clarke, S.; Weng, Q. Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sens. Environ. 2011, 115, 1145–1161.
15. Clark, M.L.; Roberts, D.A. Species-level differences in hyperspectral metrics among tropical rainforest trees as determined by a tree-based classifier. Remote Sens. 2012, 4, 1820–1855.
16. Feret, J.-B.; Asner, G.P. Tree Species Discrimination in Tropical Forests Using Airborne Imaging Spectroscopy. IEEE Trans. Geosci. Remote Sens. 2013, 51, 73–84.
17. van der Sande, C.J.; de Jong, S.M.; de Roo, A.P.J. A segmentation and classification approach of IKONOS-2 imagery for land cover mapping to assist flood risk and flood damage assessment. Int. J. Appl. Earth Obs. Geoinf. 2003, 4, 217–229.
18. Myint, S.W.; Franklin, J.; Buenemann, M.; Kim, W.K.; Gini, C.P. Examining Change Detection Approaches for Tropical Mangrove Monitoring. Photogramm. Eng. Remote Sens. 2014, 80, 983–993.
19. Ke, Y.; Quackenbush, L.J.; Im, J. Synergistic use of QuickBird multispectral imagery and LIDAR data for object-based forest species classification. Remote Sens. Environ. 2010, 114, 1141–1154.
20. Cross, M.D.; Scambos, T.; Pacifici, F.; Marshall, W. Determining Effective Meter-scale Image Data and Spectral Vegetation Indicies for Tropical Forest Species Differentiation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, in press.
21. Prospere, K.; McLaren, K.; Wilson, B. Plant species discrimination in a tropical wetland using in situ hyperspectral data. Remote Sens. 2014, 6, 8494–8523.
22. Tuia, D.; Ratle, F.; Pacifici, F.; Kanevski, M.F.; Emery, W.J. Active Learning Methods for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2218–2232.
23. Hesketh, M.; Sánchez-Azofeifa, G.A. The effect of seasonal spectral variation on species classification in the Panamanian tropical forest. Remote Sens. Environ. 2012, 118, 73–82.
24. Castro-Esau, K.L.; Sánchez-Azofeifa, G.A.; Rivard, B.; Wright, S.J.; Quesada, M. Variability in leaf optical properties of mesoamerican trees and the potential for species classification. Am. J. Bot. 2006, 93, 517–530.
25. Cross, M.D.; Scambos, T.; Pacifici, F.; Marshal, W. Validating the Use of Metre-Scale Multi-Spectral Satellite Image Data for Identifying Tropical Forest Tree Species. Int. J. Remote Sens. 2018, 39, 3723–3752.
26. Latif, Z.A.B.D.; Ibrahim, N. Tree Species Identification Using High Resolution Remotely-Sensed Data. In Proceedings of the FIG Congress, Kuala Lumpur, Malaysia, 16–21 June 2014; pp. 16–21.
27. Wang, L.; Sousa, W.P.; Gong, P.; Biging, G.S. Comparison of IKONOS and QuickBird images for mapping mangrove species on the Caribbean coast of Panama. Remote Sens. Environ. 2004, 91, 432–440.
28. Vargas, O.; Castro, E. Species List of the Leslie R. Holdridge Arboretum; Organization for Tropical Studies (OTS), Scientific Department, La Selva Biological Station: Puerto Viejo de Sarapiqui, Costa Rica, 2017.
29. Papeş, M.; Tupayachi, R.; Martínez, P.; Peterson, A.T.; Powell, G.V.N. Using hyperspectral satellite imagery for regional inventories: A test with tropical emergent trees in the Amazon Basin. J. Veg. Sci. 2010, 21, 342–354.
30. Clark, M.L.; Roberts, D.A.; Clark, D.B. Hyperspectral discrimination of tropical rain forest tree species at leaf to crown scales. Remote Sens. Environ. 2005, 96, 375–398.
31. Bergen, K.M.; Goetz, S.J.; Dubayah, R.O.; Henebry, G.M.; Hunsaker, C.T.; Imhoff, M.L.; Nelson, R.F.; Parker, G.G.; Radeloff, V.C. Remote sensing of vegetation 3-D structure for biodiversity and habitat: Review and implications for lidar and radar spaceborne missions. J. Geophys. Res. Biogeosci. 2009, 114, 1–13.
32. Evans, J.S.; Hudak, A.T.; Faux, R.; Smith, A.M.S. Discrete return lidar in natural resources: Recommendations for project planning, data processing, and deliverables. Remote Sens. 2009, 1, 776–794.
33. van Leeuwen, M.; Nieuwenhuis, M. Retrieval of forest structural parameters using LiDAR remote sensing. Eur. J. Res. 2010, 129, 749–770.
34. Pacifici, F. An automatic atmospheric compensation algorithm for very high spatial resolution imagery and its comparison to FLAASH and QUAC. In Proceedings of the Joint Agency Commercial Imagery Evaluation (JACIE) Workshop, Saint Louis, MO, USA, 16–18 April 2013; pp. 16–18.
35. Pacifici, F. Validation of the Digital Globe Surface Reflectance Product. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 1973–1975.
36. Moura, Y.M.; Galvão, L.S.; dos Santos, J.R.; Roberts, D.A.; Breunig, F.M. Use of MISR/Terra data to study intra- and inter-annual EVI variations in the dry season of tropical forest. Remote Sens. Environ. 2012, 127, 260–270.
37. Bousquet, L.; Lachérade, S.; Jacquemoud, S.; Moya, I. Leaf BRDF measurements and model for specular and diffuse components differentiation. Remote Sens. Environ. 2005, 98, 201–211.
38. Middleton, E.M. Quantifying Reflectance Anisotropy of Photosynthetically Active Radiation in Grasslands. J. Geophys. Res. 1992, 97, 18935–18946.
39. Weyermann, J.; Damm, A.; Kneubühler, M.; Schaepman, M.E. Correction of reflectance anisotropy effects of vegetation on airborne spectroscopy data and derived products. IEEE Trans. Geosci. Remote Sens. 2014, 52, 616–627.
40. Pacifici, F.; Longbotham, N.; Emery, W.J. The importance of physical quantities for the analysis of multitemporal and multiangular optical very high spatial resolution images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6241–6256.
41. Breunig, F.M.; Galvao, L.S.; Moura, Y.M.; Balbinout, R. Preliminary results of the BRF dependence of a subtropical semideciduous forest on angular and directional effects. In Proceedings of the Anais XVI Simpósio Brasileiro de Sensoriamento Remoto SBSR, Foz do Iguaçu, Brazil, 13–18 April 2013; Volume 1986, pp. 6917–6922.
42. Gastellu-Etchegorry, J.P.; Guillevic, P.; Zagolksi, F.; Demarez, V.; Trichon, V.; Deering, D.; Leroy, M. Modeling BRF and radiation regime of tropical and boreal forests, Part I: BRF. Remote Sens. Environ. 1999, 68, 281.
43. Digital Globe WorldView-3 Data Sheet. Digit. Ds-Wv3 09/14. 2014, p. 2. Available online: https://www.digitalglobe.com (accessed on 12 June 2019).
44. Campbell James, B.; Wayne, R.H. Introduction to Remote Sensing, 5th ed.; The Guilford Press: New York, NY, USA, 2011.
45. Wolf, A.F. Using Worldview-2 Vis-NIR Multispectral Imagery to Support Land Mapping and Feature Extraction Using Normalized Difference Index Ratios. In Proceedings of the Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII. International Society for Optics and Photonics, Baltimore, MD, USA, 18–21 April 2016; Volume 8390, pp. 1–8.
46. Lillesand, T.M.; Kiefer, R.W.; Chipman, J.W. Remote Sensing and Image Interpretation, 6th ed.; John Wiley and Sons Ltd.: London, UK, 2008.
47. Gitelson, A.A.; Merzlyak, M.N. Remote sensing of chlorophyll concentration in higher plant leaves. Adv. Sp. Res. 1998, 22, 689–692.
48. Asner, G.P. Biophysical and biochemical sources of variability in canopy reflectance. Remote Sens. Environ. 1998, 64, 234–253.
49. Delegido, J.; Verrelst, J.; Meza, C.M.; Rivera, J.P.; Alonso, L.; Moreno, J. A red-edge spectral index for remote sensing estimation of green LAI over agroecosystems. Eur. J. Agron. 2013, 46, 42–52.
50. Merton, R.; Huntington, J. Early Simulation Results of the Aries-1 Satellite Sensor for Multi-Temporal Vegetation Research Derived from Aviris. In Proceedings of the Eighth Annual JPL Airborne Earth Science Workshop, Pasadena, CA, USA, 9–11 February 1999; pp. 9–11.
51. Xue, J.; Su, B. Significant Remote Sensing Vegetation Indices: A Review of Developments and Applications. J. Sens. 2017, 2017, 1353691.
52. Gitelson, A.A.; Vina, A.; Masek, J.G.; Verma, S.B.; Suyker, A.E. Synoptic monitoring of gross primary productivity of maize using Landsat data. IEEE Geosci. Remote Sens. Lett. 2008, 5, 133–137.
53. van Leeuwen, W.J.D.; Orr, B.J. Spectral vegetation indices and uncertainty: Insights from a user’s perspective. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1931–1933.
54. Gao, X.; Huete, A.R.; Ni, W.; Miura, T. Optical-biophysical relationships of vegetation spectra without background contamination. Remote Sens. Environ. 2000, 74, 609–620.
55. Price, J.C. Estimating leaf area index from satellite data. IEEE Geosci. Remote Sens. 1993, 31, 727–734.
56. Hair, J.F.J.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 7th ed.; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2010.
57. Thenkabail, P.S.; Enclona, E.A.; Ashton, M.S.; Legg, C.; De Dieu, M.J. Hyperion, IKONOS, ALI, and ETM+ sensors in the study of African rainforests. Remote Sens. Environ. 2004, 90, 23–43.
58. Carleer, A.; Debeir, O.; Wolff, E. Assessment of very high spatial resolution satellite image segmentations. Photogramm. Eng. 2005, 71, 1285–1294.
59. Singh, M.; Evans, D.; Tan, B.S.; Nin, C.S. Mapping and characterizing selected canopy tree species at the Angkor world heritage site in Cambodia using aerial data. PLoS ONE 2015, 10, 1–26.
60. Ferreira, M.P.; Zanotta, D.C.; Zortea, M.; Korting, T.S.; Fonseca, L.M.G.; Shimabukuro, Y.E.; Filho, C.R.S. Automatic tree crown delineation in tropical forest using hyperspectral data. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 784–787.
61. ENVI, ENVI 5.4, Feature Extraction with Rule-Based Classification. 2019. Harris Geospatial Solutions. Available online: https://www.harrisgeospatial.com (accessed on 12 June 2019).
62. Chave, J.; Rejou-Mechain, M.; Burquez, A.; Chidumayo, E.; Colgan, M.S.; Delitti, W.B.C.; Duque, A.; Eid, T.; Fearnside, P.M.; Goodman, R.C.; et al. Improved allometric models to estimate the aboveground biomass of tropical trees. Glob. Chang. Biol. 2014, 20, 3177–3190.
63. Cho, M.A.; Sobhan, I.; Skidmore, A.K.; de Leeuw, J. Discriminating Species Using Hyperspectral Indices at Leaf and Canopy Scales. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 1–8.
1GES, University of Colorado Denver, Denver, CO 80204, USA
2CIRES, University of Colorado Boulder, Boulder, CO 80303, USA
3DigitalGlobe, Denver, CO 80234, USA
4La Selva Biological Station, Puerto Viejo de Sarapiqui, 41001, Costa Rica
5Department of Civil Engineering, University of Colorado Denver, Denver, CO 80217, USA
*Author to whom correspondence should be addressed.
© 2019. This work is licensed under the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/).
Abstract
Accurate classification of tropical tree species is critical for understanding forest habitat, biodiversity, forest composition, biomass, and the role of trees in climate variability through carbon uptake. The aim of this study is to establish an accurate classification procedure for tropical tree species, specifically testing the feasibility of WorldView-3 (WV-3) multispectral imagery for this task. The specific study site is a defined arboretum within a well-known tropical forest research location in Costa Rica (La Selva Biological Station). An object-based classification is the basis for the analysis to classify six selected tree species. A combination of pre-processed WV-3 bands served as inputs to the classification, and an edge segmentation process defined multi-pixel-scale tree canopies. The WorldView-3 Green, Red, Red Edge, and Near-Infrared 2 bands, particularly when incorporated in two specialized vegetation indices, provide high discrimination among the selected species. Classification results yield an accuracy of 85.37%, with minimal errors of commission (7.89%) and omission (14.63%). Shadowing in the satellite imagery had a significant effect on segmentation accuracy (identifying single-species canopy tops) and on classification. The methodology presented provides a path to better characterization of tropical forest species distribution and overall composition for improving biomass studies in a tropical environment.