Abbreviations
- GPS, global positioning system
- NDVI, normalized difference vegetation index
- RGB, red, green, and blue
- ROI, region of interest
- UAV, unoccupied aerial vehicle
Recent advances in remote sensing platforms and sensor technology have fast-tracked plant phenotyping research for applications in precision agriculture and plant breeding. High-throughput imaging platforms such as aerial- and gantry-based systems facilitate the rapid acquisition of imagery data; however, bottlenecks related to plot-level data extraction still exist. In plant breeding studies that require phenotyping of thousands of plots, data extraction has been considered one of the most time-consuming activities (Shi et al., 2016). Phenotypic precision and the ability to screen a large number of populations are essential aspects of high-throughput phenotyping for attaining genetic gains in breeding studies (Reynolds et al., 2020). Therefore, workflows that allow rapid data extraction are important to make phenotyping a valuable component of crop improvement programs.
Plot segmentation is a key step in image-based phenotypic analysis. This is the process of identifying individual plots in an orthomosaic, followed by data extraction. Several methods have been developed to segment and extract phenometric data from experimental units in response to the high-throughput demand. Head-up digitizing is a visualization approach in which plot boundaries are delineated manually from an image (Ubukawa et al., 2014). Although this approach produces high-accuracy maps, the time requirement is the major limitation that makes it impractical for large breeding trials. In the field-map-based method, a single plot is located and, based on its position in the field, the remaining plots are estimated using plot dimensional data of the experimental setup (Drover et al., 2018; Haghighattalab et al., 2016). This method is user-friendly; however, adjustment of the boundary boxes of individual plots may be needed because distances between plots, especially in large breeding experiments, are not always consistent due to seeding equipment errors, image distortions in the mosaicking process (Anderson et al., 2019), and irregular patterns of plant growth caused by genotype and environment variation. Furthermore, several other programs have been developed for plot-level data extraction (Anderson & Murray, 2020). Bruce et al. (2020), Matias et al. (2020), and Morales et al. (2020) proposed a precision agriculture equipment-based approach, where plot boundaries are extracted from precisely geo-referenced unoccupied aerial vehicle (UAV) imagery and GPS (global positioning system) information from seed planters; however, such GPS data may not be available if plots are not spaced regularly and initiated from a GPS signal.
Image-based methods rely on classification techniques (Fareed & Rehman, 2020; Hassanein et al., 2019). Generally, classifiers such as vegetation indices, color space pixel values, and image patterns are computed to highlight and locate plots. Image-based methods have the potential to detect irregularly sized or spaced plots; however, their accuracy depends on the vegetation cover (Haghighattalab et al., 2016), which is not always consistent in most breeding studies. Image-based methods have limitations in locating plots with overlapping vegetation or no vegetation cover. Under field conditions, several factors such as diversity of the populations, certain treatment effects (e.g., pre-emergent herbicides), disease and insect damage, seed viability, plant vigor, and environmental factors (such as soil moisture and wind erosion) can result in plots with inconsistent vegetation cover. In such cases, depending solely on image-based methods may adversely affect the precision of plot detection.
Alternatively, a combination of field-map-based and image classification methods can be used to overcome most of the existing limitations of plot segmentation. It was hypothesized that, using this combined method, plots both with and without vegetation could be located. The objective of this study was to develop a plot boundary extraction workflow for irregularly sized and spaced field plots from UAV imagery using plot spacing data and vegetation index-based classifiers.
- Conventional plot extraction methods are unreliable if field plots differ in size and spacing.
- A combination of field-map data and image classification can be used to optimize plot extraction.
- A simple convolutional filter, image thresholding, and pixel resizing were used in the study.
- The workflow avoided the use of complex algorithms for plot boundary extraction of uneven plots.
A lentil (Lens culinaris Medik.) herbicide screening study was used to develop the plot boundary workflow. The objective of the study was to select for crop tolerance to saflufenacil, a Group 14 herbicide. The study consisted of 780 F2 generation lentil populations planted in a randomized complete block design with three blocks, each block separated by a horizontal row of pea (Pisum sativum L.) plots (Figure 1a). Each plot consisted of two 1-m rows of lentil plants spaced at 80 cm between rows with a seeding rate of 25 seeds per row. We considered each crop row as a separate plot to test the robustness of our methodology. Therefore, the total number of row plots in the study was 4,836, including both lentil (4,680) and block-separating pea plots (156).
FIGURE 1. The red boundary line shows the region of interest on the color orthomosaic of the replicated lentil herbicide screening study in its original map direction (a). The black arrows in panel a indicate the horizontal rows of pea plots that separate the lentil blocks or replications. The subset frames show magnified views of the color image (b) and vegetation index (c) after map rotation. The white dotted rectangle in panel b shows a horizontal row of plots, and the black dotted rectangle represents a vertical crop row (plot). Each horizontal row of plots is separated by 1-m alleys. Panel d indicates the range of herbicide damage to lentil populations (white rectangles) in the study.
The experiment was seeded on 16 May 2018, and a pre-emergence treatment of 400 g a.i. ha−1 saflufenacil was applied on 18 May 2018. Because of the diversity of the lentil populations in the study, herbicide responses ranging from unaffected to no crop emergence resulted in field plots of different shapes and sizes throughout the experimental area (Figure 1).
A DJI M600 hexacopter UAV (SZ DJI Technology Co., Ltd.) equipped with a Micasense RedEdge 3 multispectral camera (MicaSense Inc.) was used for image acquisition.
The RedEdge camera is equipped with a downwelling light sensor (DLS) that records the incident light used for image calibration. The images were captured on 10 July 2018, when the crop was at the full canopy stage, from an altitude of 30 m above ground level with a nadir view while maintaining 80% frontal and 80% side image overlap. The ground sample distance at 30 m above ground level is ∼2 cm. For the plot boundary study, only multispectral imagery was used. Pix4Dmapper Pro (Pix4D SA) was used for image matching, bundle block adjustment, radiometric calibration, and orthomosaic generation (raster). The red, green, and blue (RGB) color orthomosaic generated in the study was a composite of the RGB bands of the RedEdge multispectral imagery (Figure 1a), created in ArcMap version 10.7 (ESRI Inc.).
Plot extraction
The plot extraction workflow was compiled using eCognition Developer version 9.5 software (Trimble). eCognition Developer is an object-oriented image analysis software used to develop rulesets for the automatic analysis of remote sensing data. The workflow included five major steps: (a) map rotation, (b) canopy detection, (c) horizontal row detection, (d) vertical row detection, and (e) plot boundary exportation. An overview of the workflow is presented in Figure 2. The italicized terms in the text (vertical highlight and horizontal highlight) are thematic layers of the workflow pipeline.
FIGURE 2. The overall workflow for lentil plot extraction from unoccupied aerial vehicle multispectral imagery. The bolded parts are the important steps of the workflow. NDVI, normalized difference vegetation index
The workflow was initiated with the rotation of the orthomosaic map, as the crop rows/plots in the image were positioned at an angle to north. The main purpose of map rotation was to adjust the map direction so that the horizontal rows of plots aligned with the east–west direction and the vertical crop rows aligned with the north–south direction, facilitating the subsequent steps (horizontal and vertical row detection). A field boundary or region of interest (ROI) in shapefile format was created manually (Figure 1a) in ArcMap version 10.7 (ESRI Inc., Redlands, CA, USA) and used as the input ROI. This ROI was converted into a raster layer and used as a mask.
The main direction variable was extracted from the ROI file in eCognition software. The ROI file (red rectangle, Figure 1a) was imported, and the main direction of the file was extracted using the update variable algorithm (insert process/update variable/by feature/shape/main direction). The whole map (including the image and shapefile) was rotated using a copy map algorithm (insert process/copy map). Rotate angle, an input parameter of the algorithm, was calculated from the main direction of the ROI so that the crop rows aligned with the north–south axis.
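As an illustration only (this is not part of the published eCognition ruleset), the rotation step can be sketched in Python with NumPy and SciPy. The relationship between the extracted main direction and the rotation angle is an assumption here, as is the `main_direction_deg` input.

```python
import numpy as np
from scipy import ndimage

def rotate_map(raster, roi_mask, main_direction_deg):
    """Rotate an orthomosaic and its ROI mask so that crop rows run north-south.

    `main_direction_deg` stands in for the orientation of the ROI's major axis
    (e.g., eCognition's "main direction" feature); the relationship below
    (main direction minus 90 degrees) is an assumption, not taken from the paper.
    """
    rotate_angle = main_direction_deg - 90.0          # assumed relationship; -8.7 degrees in this study
    rotated = ndimage.rotate(raster, rotate_angle, reshape=True, order=1)
    rotated_mask = ndimage.rotate(roi_mask.astype(float), rotate_angle,
                                  reshape=True, order=0) > 0.5
    return rotated, rotated_mask, rotate_angle
```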
Canopy detection
As this was an herbicide experiment, weed growth between crop rows was considered noise. The main purpose of this step was to create a temporary raster layer highlighting vertical crop rows (objects with high vegetation index values and an elongated vertical shape). From the input image (Figure 3a), the normalized difference vegetation index (NDVI) was calculated and inverted using the following equations:

NDVI = (R840 − R668) / (R840 + R668)

NDVIinversion = (Max + Min) − NDVI

where R840 and R668 are the reflectance values at bands centered on 840 nm (NIR) and 668 nm (red), respectively; Max is the maximum value of the NDVI, and Min is the minimum value of the NDVI within the ROI.
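For readers without eCognition, the index calculations above can be reproduced with a few lines of NumPy. This is a minimal sketch assuming the 840-nm and 668-nm bands are available as float arrays; the inversion follows the form written above.

```python
import numpy as np

def ndvi_and_inversion(nir, red, roi_mask):
    """Compute NDVI from the 840 nm (NIR) and 668 nm (red) bands and invert it
    using the Max and Min NDVI values within the ROI."""
    ndvi = (nir - red) / (nir + red + 1e-9)        # small constant avoids division by zero
    vmax = ndvi[roi_mask].max()
    vmin = ndvi[roi_mask].min()
    ndvi_inv = (vmax + vmin) - ndvi                # high-NDVI crop rows become low values
    return ndvi, ndvi_inv
```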
FIGURE 3. Subset maps of lentil plots in red, green, and blue (a); a raster layer highlighting vertical crop rows (also plots) (b); the raster layer of the horizontal crop row enhancement (highlighted in pink) (c); vector map of horizontal crop rows (d); vector map of vertical crop rows (purple) (e); vector map of dense vegetation plots (purple) with coating regions in yellow (f); vector layer with potential plot regions (light grey) vertically extended from dense plots and non-horizontal rows (g); interpolated rows after pixel resizing (h); and all detected plots after merging potential plots into vertical crop rows (i)
For the characterization of crop rows, line detection was applied to the NDVIinversion layer with specific parameters. All parameters were determined by trial-and-error assessment (Table 1). Accordingly, a new raster layer was generated, and pixel values of the new raster layer (line band) were recalculated based on the input pixel signal strength and the combined filter parameters, including line direction, line length, line width, border width, and max similarity to line border. The direction, length, and width of the extracted line were specified by the line direction, length, and width parameters. Border width is the width of the homogeneous border at the side of the extracted line. Max similarity, which ranged from 0 to 1, was used to specify the similarity of the input to its border; a line is characterized by a small similarity to its border. Vertical crop rows were highlighted when all these parameters were met.
TABLE 1 Parameter values for vertical crop row detection
Algorithm parameter | Line direction | Line length | Line width | Border width | Max similarity | Input layer | Output layer |
Value | 90 | 10 | 7 | 5 | 0.8 | NDVIinversion | Line band |
A cleaning-up step was also applied to the line band to reduce noise pixels that are outside the ROI or have lower NDVI values. Using band math (multiplication) of ROI × line band × NDVI, a cleaner line band (line mask) was generated. It is worth noting that only plots that met the criteria in Table 1 were highlighted in this output layer (Figure 3b). This vertical highlight was later used to detect horizontal rows and plots (vertical rows) with sparse or no vegetation.
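The eCognition line filter itself is proprietary, but its behavior can be approximated with a simple convolution kernel built from the Table 1 parameters: a vertical strip that is darker in the inverted NDVI (i.e., has higher NDVI) than its left and right borders receives a high response, and the clean-up band math then suppresses off-ROI and low-NDVI pixels. The following Python sketch is a rough stand-in, not the actual filter; the kernel weights are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def vertical_line_band(ndvi_inv, ndvi, roi_mask,
                       line_length=10, line_width=7, border_width=5):
    """Rough stand-in for the vertical line-detection filter (Table 1 parameters).

    The kernel responds where a vertical strip of `line_width` pixels is darker
    in the inverted NDVI than the `border_width` pixels on either side, over a
    `line_length`-pixel extent. eCognition's max-similarity-to-border criterion
    is not reproduced here.
    """
    width = line_width + 2 * border_width
    kernel = np.zeros((line_length, width))
    kernel[:, border_width:border_width + line_width] = -1.0 / (line_length * line_width)
    kernel[:, :border_width] = 0.5 / (line_length * border_width)
    kernel[:, -border_width:] = 0.5 / (line_length * border_width)

    line_band = ndimage.convolve(ndvi_inv, kernel, mode="nearest")
    line_band = np.clip(line_band, 0, None)

    # Clean-up band math: ROI x line band x NDVI suppresses off-ROI and low-NDVI pixels
    line_mask = roi_mask * line_band * ndvi
    return line_band, line_mask
```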
Horizontal row detection
The input RGB map within the ROI was segmented into equal-size matrix grids to detect the horizontal rows. Segmentation was conducted using a chessboard segmentation algorithm with an initial object size of 3 × 3 pixels (insert process/chessboard segmentation). The grids with the same x coordinate value of the rotated image were merged to form rectangle objects parallel with the horizontal crop rows. This process was conducted using a multiple object difference conditions-based fusion algorithm on the y center coordinate (insert process/multiple object difference conditions-based fusion). The x coordinate of the rotated image is temporary and does not relate to the real position of the field. The mean value on the vertical highlight layer (line band) was calculated (insert process/layer arithmetic) for each horizontal crop row and saved into a new map layer (mean horizontal). This process was looped 10 times with an increment in object size of +5 pixels (insert process/update variable/object variable += 5); each subsequent output was added to the first map layer (mean horizontal). The resulting horizontal rows with higher crop density constituted the highlighted map layer, horizontal highlight (raster layer, Figure 3c).
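Conceptually, the chessboard segmentation and looped merging amount to averaging the vertical-highlight layer over horizontal strips at several grid sizes and accumulating the result. The NumPy sketch below is an illustrative simplification under that reading; the grid sizes mimic the 3-pixel start grown by +5 pixels over ten iterations.

```python
import numpy as np

def horizontal_highlight(line_band, roi_mask,
                         sizes=(3, 8, 13, 18, 23, 28, 33, 38, 43, 48)):
    """Approximate the 'mean horizontal' layer: average the vertical-highlight
    layer over horizontal strips at each grid size and accumulate the result."""
    acc = np.zeros(line_band.shape[0])
    masked = np.where(roi_mask, line_band, 0.0)
    row_mean = masked.mean(axis=1)                       # mean per pixel row
    for size in sizes:
        n = len(row_mean) // size
        strip_means = row_mean[: n * size].reshape(n, size).mean(axis=1)
        acc[: n * size] += np.repeat(strip_means, size)  # accumulate across grid sizes
    # Broadcast the 1-D profile back to a 2-D highlight raster for visualization
    return np.tile(acc[:, None], (1, line_band.shape[1]))
```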
From the horizontal highlight, an object detection algorithm was used to detect horizontal rows. This is a process of converting the raster layer (horizontal highlight) into object features (horizontal rows in a vector layer). Again, chessboard segmentation with an object size of 15 × 15 pixels was conducted on the ROI; then, the grids with the same x coordinate value were merged to form rectangle objects parallel with the horizontal crop rows. A local maxima algorithm was employed on the rectangle objects to find those with maximum values in the horizontal highlight (insert process/find local extrema/extrema type = maximum). In this step, a searching distance range of 15 pixels (based on horizontal row distance) was set, and horizontal rows were detected and classified as horizontal rows. Once the rows were located, pixel resizing was applied to define the width of the horizontal rows (also the vertical plot length). Overall, this step created a vector map with two classes: horizontal row and non-horizontal row (Figure 3d).
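A minimal equivalent of the local maxima step, assuming the accumulated highlight has been reduced to one value per pixel row, could use scipy.signal.find_peaks with the 15-pixel search distance. The fixed row width below is a hypothetical stand-in for the pixel-resizing step.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_horizontal_rows(mean_horizontal_profile, min_distance=15, row_width=25):
    """Locate horizontal row centers as local maxima of the accumulated profile,
    then expand each center to a fixed width.

    `row_width` is illustrative; in the paper the row width (vertical plot
    length) is defined from the field layout rather than a fixed constant.
    """
    centers, _ = find_peaks(mean_horizontal_profile, distance=min_distance)
    rows = [(c - row_width // 2, c + row_width // 2) for c in centers]
    return centers, rows
```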
Vertical row detection (or plot detection)
To detect the vertical rows or plots, two sub-rulesets were compiled, namely (a) dense canopy plot extraction and (b) sparse vegetation plot extraction. In dense canopy plot extraction, image thresholding on the vertical highlight layer was performed to locate vertical row objects (vertical highlight > 0, Figure 3e). Vertical row objects with lengths greater than 25 pixels were used as seeding objects, defined as groups of pixel elements that allow us to locate dense plots (insert process/multi-threshold segmentation/threshold value = 0 and object length > 25 pixels). The object length of 25 pixels was selected based on the approximate average vegetation row length within the plots. From the seeding objects, pixel resizing was applied to extend the seeding objects vertically onto a defined region (horizontal rows in green, Figure 3e). In addition, horizontal pixel resizing (insert process/pixel-based object resizing/mode = growing) was also applied to define the plot width (plot size = horizontal row × plot width). As a result, all dense vegetation plots were located and classified in this step (dense plots in purple, Figure 3f). Coating regions on both sides of each dense plot were created. The coating region is a buffer region around the known plots within the horizontal row (created using horizontal pixel resizing) with a defined distance (30 cm). The purpose of the coating region was to prevent any newly estimated plot from being located too close to the existing (dense) plots (Figure 3f).
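A simplified Python counterpart of the dense-plot extraction, thresholding the vertical highlight, keeping connected components longer than 25 pixels as seeding objects, and growing them horizontally to an assumed plot width, might look like the sketch below; the half-width value is illustrative, not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def dense_plot_seeds(line_mask, min_length=25, plot_half_width=4):
    """Threshold the vertical-highlight layer, keep connected components whose
    vertical extent exceeds `min_length` pixels (seeding objects), and grow
    them horizontally to an assumed plot width."""
    binary = line_mask > 0
    labels, n = ndimage.label(binary)
    seeds = np.zeros_like(binary)
    for obj_slice, lab in zip(ndimage.find_objects(labels), range(1, n + 1)):
        height = obj_slice[0].stop - obj_slice[0].start   # vertical extent of the object
        if height > min_length:
            seeds[obj_slice] |= labels[obj_slice] == lab
    # Horizontal 'pixel resizing' (growing) to define plot width
    structure = np.ones((1, 2 * plot_half_width + 1), dtype=bool)
    dense_plots = ndimage.binary_dilation(seeds, structure=structure)
    return dense_plots
```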
Plots with sparse vegetation were interpolated from the dense plots. The pixel-based object resizing algorithm (hereafter pixel resizing), a built-in tool that allows us to grow or shrink objects based on pixel criteria and a defined number of cycles in the loop, was used. In this step, the dense plots were extended or grown vertically to the following horizontal row objects forming potential vertical rows (grow vertically 250 pixels, Figure 3g).
From the non-horizontal rows, pixel resizing was applied horizontally to form interpolated rows (Figure 3h, in light grey). Finally, dense plots and interpolated rows were assigned to the plot class, and all other objects were assigned to the ground class (Figure 3i). In short, plots with dense vegetation were detected using the image thresholding algorithm. Plots with sparse or no vegetation (the unknown plots) were located from the known (dense) plots under the assumptions that the unknown plots have the same y coordinate as the known plots and are located within horizontal rows.
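The straight-line interpolation of sparse or empty plots can be sketched as follows, assuming dense-plot column centers have already been extracted per horizontal row. This is only an outline of the idea; a real implementation would also cluster near-duplicate column positions before reusing them, and the coating buffer here is expressed in pixels.

```python
import numpy as np

def interpolate_sparse_plots(dense_centers_by_row, row_ids, coating_px=15):
    """Infer missing plot positions: every column position where a dense plot
    was found in any horizontal row is assumed to hold a plot in every row,
    unless an existing dense plot (plus its coating buffer) already occupies it.

    `dense_centers_by_row` maps row id -> array of dense-plot column centers.
    """
    # Union of observed column positions across rows (the straight-line assumption)
    all_cols = np.unique(np.concatenate(list(dense_centers_by_row.values())))
    plots = {}
    for rid in row_ids:
        existing = dense_centers_by_row.get(rid, np.array([]))
        inferred = [c for c in all_cols
                    if existing.size == 0 or np.min(np.abs(existing - c)) > coating_px]
        plots[rid] = np.sort(np.concatenate([existing, np.array(inferred)]))
    return plots
```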
Plot boundary and cover map exportation
The final steps included rotating the orthomosaic back to its original angle using the synchronization tool. An orthogonalization algorithm was used to simplify the vertices of the plot boundary map layer. Cover classification was also conducted using threshold segmentation to generate the crop cover area for the extraction of phenometric information. The resulting plot boundary layer and crop cover area map (classified map) were then exported to a shapefile.
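A comparable simplification-and-export step outside eCognition could use shapely's minimum rotated rectangle and geopandas (both assumptions, not tools used in the study) to reduce each pixelated plot polygon to four vertices and write a shapefile.

```python
import geopandas as gpd

def export_plot_boundaries(plot_gdf, out_path="plot_boundaries.shp"):
    """Reduce each pixelated plot polygon to a four-vertex rectangle and write
    a shapefile. This mimics the orthogonalization and export steps but is not
    the eCognition implementation; geopandas/shapely are assumed available.
    """
    simplified = plot_gdf.copy()
    simplified["geometry"] = simplified.geometry.apply(
        lambda geom: geom.minimum_rotated_rectangle)
    simplified.to_file(out_path)   # attribute columns (plot id, size, class) carry over
    return simplified
```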
A supplemental file that walks through the execution of the above steps in eCognition, including screenshots of the processing workflow for each step, was made available as a part of the manuscript. The eCognition rulesets of the workflow have also been added to GitHub and can be accessed from here (
The workflow precisely detected and aligned plot boundaries for both dense and sparse or no-vegetation plots (Figure 4a and b). A total of 4,836 single-row plots were detected and segmented.
FIGURE 4. Plot boundary map of all the plots (a), vector map of the rotated plots (b), and plot boundary in real plot direction angle in simplified polygons (c)
The workflow was initiated with map rotation, and the rotation angle calculated from the input ROI file was −8.7° to north. This angle was used as an input parameter in the map synchronization tool. Map rotation is essential as it helps in detecting rows more precisely. As shown in Figure 4, the process aligns the plot direction with north, which allows the use of pixel resizing in the following steps.
Horizontal row detection was based on vegetation density reflected in the line detection filter. Objects outside the horizontal rows, including weeds and seeding errors, were eliminated. Notably, the local maxima algorithm in this automatic process located the approximate center of each horizontal row by using the maximum value from the vertical highlight.
Based on the NDVI filter, the line detection filter highlighted vertical crop rows. Approximately 70% of the plot rows were highlighted. This step allowed us to overcome the difficulty of segmenting plots with uneven plot-to-plot spacing generated by seeding equipment errors or due to low seedling emergence and subsequent crop vegetation.
Vertical rows were precisely located for plots with high vegetation cover. For plots with low vegetation cover, plot locations were inferred from the high-vegetation-cover plots, assuming that a vertical row in one horizontal row and the corresponding vertical row in the next horizontal row lie on a straight line.
The classified objects were exported into a shapefile, including information about each object's size, location, and class name. It is worth noting that each plot boundary contains a polygon object with only four vertices (i.e., four points at the plot corners, Figure 4c). This means that the unnecessary pixelated vertices that are common in plot boundaries extracted from UAV imagery were eliminated, which is important for visualization and data processing in geospatial analysis. Overlaying the plot boundary map on the RGB color map, visual assessment confirmed that all plots were lined up precisely (Figure 4a and b). The output map produced plot boundaries for all the plots.
The workflow developed in this study minimizes the need for manual inputs to adjust plot boundaries, given that the distances between the plots (horizontal and vertical) are not always consistent. Alignment issues among orthomosaics from different imaging dates (time series data) can also be overcome with this workflow. In the case of RGB color imagery, NDVI can be substituted with visible-band vegetation indices such as excess green (Woebbecke et al., 1995) for canopy detection. We believe that the workflow is effective, especially in the early and mid-growing seasons with different spacing between the plots. However, during the later part of the growing season, factors such as overlapping canopies from adjacent plots or crop lodging may limit the effectiveness of the workflow. The workflow is compiled as a ruleset, and because precise plot boundary extraction was achieved on single-row plots, it can be easily adapted to multiple-row plots and designs. Although the workflow produced a high-accuracy map, expert knowledge of contextual information such as plot/crop row arrangement and vegetation index improves precision.
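For reference, the excess green index mentioned above (Woebbecke et al., 1995) can be computed from an RGB orthomosaic as 2g − r − b on band-normalized values; a short NumPy sketch is given below.

```python
import numpy as np

def excess_green(rgb):
    """Excess green index (ExG = 2g - r - b) on band-normalized RGB values;
    `rgb` is an (H, W, 3) float array."""
    total = rgb.sum(axis=2) + 1e-9               # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2 * g - r - b
```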
CONCLUSIONS
This study introduces a practical workflow for extracting irregularly sized and spaced field plots from UAV multispectral imagery. Vegetation index and field-based information were used to compile the workflow. Advanced tools in eCognition software were used in the plot extraction methodology. In addition, the workflow was compiled in a processing tree that allows knowledge-based intervention according to data-specific requirements. By using a simple convolutional filter (line filter), image thresholding, and pixel resizing, our approach avoided complex algorithms such as convolutional neural networks or supervised training datasets. The current workflow focused on single-row lentil plots; however, the methodology can be extended to multi-row plots and imagery from other sensors, with applications in a wide range of phenotyping studies.
ACKNOWLEDGMENTS
The authors would like to acknowledge the Plant Phenotyping & Imaging Research Center (P2IRC) and the Global Institute for Food Security (GIFS) at the University of Saskatchewan, SK, Canada, for grant support for this project.
AUTHOR CONTRIBUTIONS
Thuan Ha: Data curation; Formal analysis; Methodology; Software; Writing – original draft. Hema Duddu: Data curation; Formal analysis; Project administration; Visualization; Writing – original draft. Albert Vandenberg: Conceptualization; Funding acquisition; Investigation; Supervision; Writing – review & editing. Steve Shirtliffe: Funding acquisition; Resources; Supervision; Writing – review & editing.
CONFLICTS OF INTEREST
No conflict of interest has been declared by the authors.
© 2022. This work is published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Abstract
Advances in high-throughput platforms such as UAVs (unoccupied aerial vehicles) facilitate rapid image-based phenotypic data acquisition. However, existing plot-level data extraction methods are unreliable if field plots differ in size and spacing, as often occurs in early-generation plant breeding trials. To overcome the limitations of conventional plot extraction techniques, a combinational approach with both field-map information and image classification techniques can be used to optimize plot extraction. The objective of this study was to develop a plot boundary extraction workflow for irregularly sized and spaced field plots from UAV imagery using plot spacing data and vegetation index-based classifiers. An herbicide screening experiment consisting of three replications of 780 lentil (Lens culinaris Medik.) populations was foliar sprayed with saflufenacil. Aerial image acquisition was conducted during the peak vegetation stage using a RedEdge multispectral camera. A semi-automatic workflow was compiled in eCognition software to extract lentil plot boundaries. Normalized difference vegetation index (NDVI) was calculated to locate the plots with vegetation and those with low NDVI or no vegetation, and pixel resizing based on plot size and orientation was used to draw the plot boundary. The extraction results showed a precise estimation of plot boundary for all the plots with a wide range of herbicide damage, including the plots with complete loss of vegetation. By using a simple convolutional filter (line filter), image thresholding, and pixel resizing, this approach avoided the use of complex algorithm-based methodologies. Results suggest that this workflow can be extended to a wide range of phenotyping studies.