
Fusion of airborne multimodal point clouds for vegetation parameter correction extraction in burned areas

Open Access

Abstract

Most experimental studies process unimodal data: RGB image point clouds cannot separate the shrub and tree layers using visible vegetation indices alone, and airborne laser point clouds struggle to distinguish ground from grass. To address these problems, a multi-band information image fusing the LiDAR point cloud and the RGB image point cloud is constructed. In this study, data collected from UAV platforms, comprising RGB image point clouds and laser point clouds, were used to build a fine canopy height model (CHM, from the laser point cloud) and a high-definition digital orthophoto (from the image point cloud). After comparing the accuracy of different visible light indices, the Difference Enhancement Vegetation Index (DEVI) and the Normalized Green-Blue Difference Index (NGBDI) were selected and fused with the CHM. The CHM + DEVI/NGBDI fusion images were morphologically reconstructed to remove unreasonable values; training samples were constructed, and a classification and regression tree (CART) algorithm segmented the extent of the burned areas and adaptively extracted vegetation as trees, shrubs, and grassland. Within the tree areas, a local maximum algorithm detected tree apexes as foreground markers, non-tree areas were assigned as background markers, and a watershed transform produced the segmentation contours. The original laser point cloud was then divided into blocks according to the segmented single-tree contours, each block was traversed to find its highest point, and the single-tree heights were corrected one by one. Accuracy analysis of the extracted vegetation information against measured data showed that the improved method increased the overall recall by 4.1%, the overall precision by 3.7%, and the overall F1 score by 3.9%, while the tree height accuracy in the six sample plots improved by 8.8%, 1.4%, 1.7%, 6.4%, 1.8%, and 0.3%, respectively. The effectiveness of the improved method is thus verified, and the more mixed the vegetation in a region, the greater the improvement delivered by the algorithm.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Tree height, vegetation extent, and similar measures are extremely important vegetation information and are often used in forest parameter inversion [1], biomass estimation, and related applications. However, traditional methods of obtaining vegetation information mostly rely on field measurements with instruments such as hypsometers, which are labour-intensive and cannot cover large forest areas. As the payload capacity of unmanned aerial vehicles (UAVs) has increased, they can carry visible light cameras that acquire high-precision images containing two-dimensional surface information, such as object texture, spectra, and the topological relationships between objects, allowing feature types to be identified accurately [2,3]. Equipped with LiDAR systems, they can penetrate the vegetation canopy to acquire both the canopy surface and the understory topography, providing rapid access to structural information such as tree height and diameter at breast height [4–6].

UAV image data generate a fine digital orthophoto map (DOM), which is segmented using visible vegetation indices to obtain an accurate vegetation extent. Wang et al. [7] constructed the visible-band difference vegetation index based on the principle of the normalized vegetation index, and showed that its extraction accuracy can exceed 90%. Zhou et al. [8] proposed the Difference Enhancement Vegetation Index (DEVI) for urban areas with a large proportion of green vegetation, exploiting the property that the green-band reflectance of green vegetation simultaneously exceeds the red- and blue-band reflectances. Shen et al. [9] combined multispectral data with RGB imagery to estimate forest structural attributes, but found it difficult to distinguish spectrally similar targets where vegetation is mixed.

Tree height is mainly obtained by constructing a canopy height model (CHM), so the accuracy of the model is particularly important. Although UAV image data can generate a dense point cloud containing the spatial location and height characteristics of features, Yu et al. [10] showed that single-tree segmentation of cedar from UAV image data performs poorly in forest with high canopy closure and must be combined with LiDAR point clouds to improve segmentation accuracy. Yan et al. [11] extracted a vegetation canopy height model from UAV LiDAR data, which readily provides accurate spatial information; even in dense forested areas, the laser penetrates branches and leaves to capture part of the understory topography, yielding a CHM with smaller errors. Nie et al. [12] extracted accurate canopy models from LiDAR point clouds, analysed the degree of CHM distortion under different slopes and crown shapes, and identified the need for a fine CHM.

However, extracting vegetation information from visible images has limitations: such images only provide spectral and textural information on the vegetation surface, making it difficult to differentiate vertical vegetation structures under the same topographic conditions, and the spectral signal may saturate [13]. The generated CHM has an accurate spatial structure but is susceptible to low vegetation clinging to the ground, which reduces the accuracy of the extracted vegetation information [14]. To address these problems, Luo et al. [15] combined colour and elevation information from UAV images to subdivide vegetation, bare ground, tailing sand, and water in an open phosphate mining area near Dianchi Lake, overcoming the limitations of vegetation extraction but with relatively limited precision; Lu et al. [16] separately fused multimodal data from different UAV platforms to invert citrus leaf area index, and the complementarity of the data effectively improved the results. Following this idea, this paper combines the advantages of the two data sources: the CHM is fused with visible vegetation indices to construct CHM + DEVI and CHM + NGBDI images carrying both colour information and spatial structure; the classification and regression tree (CART) algorithm [17] adaptively segments the vertical vegetation structure and the burned extent; and single-tree segmentation is performed on the resulting tree regions.

The watershed algorithm is the most common algorithm for single-tree segmentation. Meyer et al. [18] first proposed marker-controlled watershed segmentation (MCWS) in 1990 to prevent noise from over-segmenting images; Ma et al. [19] used morphological opening and closing reconstruction to remove noise points from the image and correct unreasonable values, further weakening over-segmentation. The disadvantage of MCWS is that segmentation quality depends closely on marker selection, so accurate markers are especially important. Xu et al. [20] smoothed the image with Gaussian filtering and extracted block regions around seed points with an adaptive threshold segmentation algorithm to obtain a more accurate marker range and improve accuracy; Xu et al. [21] modified the local maximum algorithm to obtain more reasonable markers, ultimately improving individual crown detection accuracy.

The fused images described above were used to enhance the watershed algorithm. In the MCWS algorithm, the inverted watershed dams constructed during flooding are influenced by the pools formed by surrounding vegetation: the extraction of tall-tree information can be disrupted by shrubs, and shrubs can in turn be influenced by lower vegetation layers in the vertical structure. In other words, mixed vegetation degrades the accuracy of the final height extraction. To mitigate this influence, a series of improvements was undertaken. Initially, the morphological reconstruction algorithm was applied to correct the fused images. Subsequently, training samples were constructed, and the CART algorithm was employed to delineate the burned areas and further classify vegetation into trees, shrubs, and grassland. After extracting the tree-covered regions, a local maximum algorithm [22,23] was applied for marking, improving the precision of the marker selection region and thereby the accuracy of the watershed algorithm. Because a conventional CHM may displace tree tops, the original laser point cloud data was used to rectify the elevation errors, serving as the basis for the final extraction of individual tree heights [24]. Six sample plots were selected to compare the segmentation accuracy of four single-tree segmentation algorithms, and the extracted vegetation information was verified against the measured data, demonstrating that the algorithm can extract individual trees while stripping away the influence of mixed vegetation and obtain more accurate vegetation information.

2. Materials

2.1 Study area

The experimental site is located near Yunmeng Mountain in Xin'an County, Luoyang City, Henan Province, China. The selected area encompasses a combination of natural and artificially planted forests. On July 20, 2022, data was collected using a hexacopter UAV equipped with the RIEGL VUX-1 laser scanning system. The UAV flew at an altitude of 200 meters with a 70% sidelap. Simultaneously, a quadcopter UAV equipped with a high-resolution digital camera captured remote sensing images of the same area. The camera operated in oblique photography mode at an altitude of 150 meters with 80% overlap in both the along-track and across-track directions, resulting in 259 RGB images. Initial data processing yielded two types of point cloud data for the experimental area: a LiDAR point cloud with a density of 112 points per square meter and an image-based point cloud with a density of 276 points per square meter, as shown in Fig. 1. The corresponding plots are also annotated in the figure.


Fig. 1. Overview of the study area. (a) Geographic location of the study area; (b) RGB point cloud; (c) LiDAR point cloud.


After registering the two types of point cloud data with the iterative closest point (ICP) algorithm [25], the LiDAR point cloud was filtered with the cloth simulation filtering algorithm [26] to derive the ground points of the study area, from which a digital elevation model (DEM) was constructed using an irregular triangulated network. A digital surface model (DSM) was created from the laser first returns, which capture only surface information. The canopy height model was obtained by subtracting the DEM from the DSM. Simultaneously, a high-resolution DOM was generated from the image-based point cloud, and the visible light color indices (DEVI and NGBDI) were computed from its RGB information. In data preprocessing, the spatial resolution of the generated models was maintained at 25 cm; a finer resolution tends to cause over-segmentation with the watershed algorithm, while a coarser resolution can cause under-segmentation.
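As a minimal illustration of the CHM step, the MATLAB sketch below differences the two rasters; the variable names and the clamping of negative values are our assumptions, not a transcription of the authors' pipeline.

```matlab
% Minimal sketch of the CHM construction, assuming the DEM and DSM have
% already been rasterized to the same 25 cm grid and loaded as equally
% sized matrices (variable names are hypothetical).
chm = dsm - dem;                      % canopy height = surface minus terrain
chm(chm < 0) = 0;                     % clamp negative heights left by noise
chm(isnan(dsm) | isnan(dem)) = 0;     % mask cells without laser coverage
imagesc(chm); axis image; colorbar;   % quick visual check of the model
```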

2.2 Ground-truthing data

Ground-level measurements and airborne data collection were conducted simultaneously, with tree heights measured using a handheld hypsometer and individual tree positions within the sample plots measured by GPS and marked manually. Six sample plots were chosen based on vegetation distribution and wildfire burn marks: five in natural forest areas and one in an artificially planted region. Sample plots 1, 2, and 3, illustrated in Fig. 2, show the horizontal distribution of vegetation from high-resolution photos and are distinguished by their positions relative to the burned areas. For sample plots 4, 5, and 6, a simultaneous localization and mapping (SLAM) scanner captured the actual topography of the plots in real time; these data, displayed in Fig. 3, depict the vertical distribution of vegetation by elevation. Considering the vegetation distribution across the six plots, sample plots 1 (Fig. 2(a)) and 4 (Fig. 3(a)) exhibit a high degree of vegetation mixing, plots 3 (Fig. 2(c)) and 5 (Fig. 3(b)) a moderate degree, and plots 2 (Fig. 2(b)) and 6 (Fig. 3(c)) the lowest degree.


Fig. 2. High-resolution photos showing the horizontal extent of vegetation. (a) Sample 1 is located at the boundary of the burned areas; (b) Sample 2 is located inside the burned areas; (c) Sample 3 is located outside the burned areas.



Fig. 3. SLAM elevation data showing the vertical vegetation hierarchy. (a) Sample 4: high degree of mixed vegetation; (b) Sample 5: medium degree of mixed vegetation; (c) Sample 6: low degree of mixed vegetation.


Sample Plot 1 comprises a total of 115 wild trees, predominantly oak and small pine trees. Approximately half of the area is within the burn zone, where surviving vegetation intertwines with new growth, resulting in a high degree of vegetation intermixing. The average tree height is around 4.5 meters. Sample Plot 2 is situated within the burn area and hosts 57 charred oak trees, exhibiting a single tree species with a low level of vegetation mixing. The average tree height is approximately 5.6 meters. In Sample Plot 3, there are 89 wild trees, primarily oak, with slight traces of wildfire erosion along the boundary. The crown width varies significantly between the center and the edges, presenting a moderate degree of vegetation mixing. The average tree height is around 7.9 meters. Sample Plot 4 features 153 small trees with evident signs of artificial planting, mainly pear and privet trees. The average tree height is about 2.2 meters, and the vegetation within the area exhibits a high degree of mixing. Sample Plot 5 contains 73 trees, including mostly oak and some hawthorn. Low-lying vegetation is distributed between the trees, resulting in a moderate degree of mixing. The average tree height is around 7.6 meters. Sample Plot 6, located near the mountain, is dominated by oak trees, totaling 162 trees. Only a few shrubs and grassy areas are present along the road boundaries, resulting in a low degree of vegetation mixing. The average tree height is approximately 8.5 meters.

3. Methods

Vegetation information obtained solely through LiDAR point clouds or vegetation indices has inherent limitations. In areas with mixed vegetation, significant accuracy errors are present, making their practical application challenging. This paper addresses these limitations by leveraging the unique advantages of both datasets to create a fused dataset that incorporates both color information and spatial structure. This fused dataset is then used for vegetation extraction, and the method and process are illustrated in Fig. 4.


Fig. 4. Overall flow chart.


Firstly, ground points were extracted from the airborne LiDAR point cloud using a cloth simulation filtering algorithm, and an irregular triangulated network was generated from them to create a DEM. Combining the DEM with the digital surface model generated from the laser first returns, a CHM containing spatial information was constructed. A high-precision DOM was obtained from the airborne image data, visible light color indices were computed, and after comparing their accuracies, DEVI and NGBDI were selected to produce index maps carrying color information. The CHM was then fused with DEVI and NGBDI separately, generating images (CHM + DEVI and CHM + NGBDI) containing both spatial structure and color information, which were used to enhance the watershed segmentation. Morphological reconstruction was applied after image fusion to eliminate small protrusions and smooth out merged irregularities. Training samples were established, and the CART algorithm was employed for adaptive extraction of vegetation types (trees, shrubs, and grassland) within the burned area. For tree regions, a local maximum algorithm detected tree canopy tops, serving as foreground markers, while non-tree regions were designated as background markers. The marker image underwent the watershed transformation to obtain individual tree contours. Finally, the original point cloud was used to correct the offset of the tree canopy tops, yielding the final individual tree heights. To assess the accuracy of the vegetation information, the tree counts and tree heights extracted by this method were analyzed against ground truth measurements separately.

3.1 Visible light index selection

The visible light index refers to the combination of two or more bands within the visible light range of remote sensing images, amplifying differences between various land cover types to effectively differentiate between them. Among existing indices, most involve the combination of bands in the visible and near-infrared ranges. These indices include, but are not limited to, drought or carbon depletion indices, narrow-band greenness indices, broad-band greenness indices, canopy nitrogen indices, light use efficiency indices, canopy water content indices, and chlorophyll indices. The Visible Light Vegetation Index, for instance, leverages the spectral reflection characteristics of healthy green vegetation, where the reflectance in the green band is simultaneously greater than that in the red and blue bands. This feature makes it easier to process RGB images that are more readily available. However, vegetation indices constructed solely based on the visible light bands are relatively limited, each with different applications. Some formulas for calculating certain visible light indices are presented in Table 1.


Table 1. Visible light index

The high-resolution digital orthophoto obtained from the drone imagery includes precise RGB bands. The right half of the study area is used primarily for vegetation segmentation based on visible light indices, while the left half is dedicated to analysing the impact of wildfire. Using the visible light vegetation index formulas from Table 1, calculations were performed in ENVI, yielding the index results depicted in Figs. 5(a)–(f).
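For readers reproducing this step outside ENVI, a per-pixel computation might look like the following MATLAB sketch. The VDVI and NGBDI formulas shown are the commonly published definitions, not a transcription of Table 1; DEVI and the remaining indices follow their formulas in that table, and the file name is a placeholder.

```matlab
% Hedged sketch of the index computation from the DOM's RGB bands.
rgb = im2double(imread('dom.tif'));             % hypothetical DOM file name
R = rgb(:,:,1); G = rgb(:,:,2); B = rgb(:,:,3);
vdvi  = (2*G - R - B) ./ (2*G + R + B + eps);   % visible-band difference VI
ngbdi = (G - B) ./ (G + B + eps);               % normalized green-blue difference
```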


Fig. 5. Visible light index. Vegetation discrimination: (a) DEVI; (b) RGRI; (c) VDVI; (d) NGRDI. Burned areas discrimination: (e) NGBDI; (f) GBRI.


A histogram was constructed based on the corresponding visible light indices, and an accurate threshold was determined using a bimodal method, as shown in Fig. 6. The bimodal histogram thresholding method is rooted in a global comparative segmentation approach.


Fig. 6. Visible light index bimodal threshold method.


The index calculation accentuates the contrast between green vegetation and non-green vegetation in the study area. The resulting index histogram exhibits a bimodal shape, with one peak representing green vegetation and the other non-green vegetation. The valley between the two peaks marks the boundary between the classes and is where the threshold is placed. This method was used to determine threshold values for the various vegetation indices, and the extraction of burned areas follows a similar process.
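A minimal MATLAB sketch of this valley-finding step is given below; the 256-bin histogram and the smoothing window are assumptions, and `idx` stands for any of the index images.

```matlab
% Bimodal (valley) threshold sketch for an index image 'idx' (hypothetical).
[counts, edges] = histcounts(idx(~isnan(idx)), 256);
counts = smoothdata(counts, 'gaussian', 9);        % suppress spurious dips
[~, locs] = findpeaks(counts, 'SortStr', 'descend', 'NPeaks', 2);
lo = min(locs); hi = max(locs);                    % the two dominant peaks
[~, rel] = min(counts(lo:hi));                     % deepest valley between them
thresh = edges(lo + rel - 1);                      % threshold at the valley
vegMask = idx > thresh;                            % e.g. vegetation vs. background
```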

Simultaneously, employing a human-machine interactive approach, the entire study area was divided into two parts. In Study Area 1, the image was pixelated to distinguish between the burned and unburned regions: the burned area contained 72,444 pixels and the unburned area 229,758 pixels. In Study Area 2, the image was pixelated to distinguish between vegetation and non-vegetation: the vegetation area contained 327,728 pixels and the non-vegetation area 67,200 pixels. The accuracy evaluation of the extracted vegetation areas is detailed in Table 2.


Table 2. Accuracy report of visible light index

Histograms were constructed from the corresponding vegetation indices, and the bimodal method was employed to determine accurate thresholds for segmenting each image into vegetation and non-vegetation areas, as well as burned and unburned areas, as shown in Figs. 7(a)–(f). According to Table 2, the statistical histograms for MGRVI and EXG did not exhibit bimodal characteristics, so thresholds could not be determined with the bimodal method. The dashed regions in Fig. 7 show that RGRI and NGRDI over-segmented larger areas, while VDVI and GBRI under-segmented smaller areas. The analysis revealed that, for vegetation extraction, DEVI and VDVI formed well-defined bimodal histograms compared with the other indices. DEVI in particular is more sensitive to green vegetation because it amplifies the property that green-band reflectance simultaneously exceeds red- and blue-band reflectance. Given that the data were collected in summer with abundant green vegetation, DEVI was selected for vegetation extraction, yielding the best results with an easily determined bimodal threshold range between 0.9 and 1. For extracting the wildfire area, NGBDI proved highly sensitive to the gray-black soil left after burning and achieved the highest accuracy, so it was adopted in the subsequent algorithm for extracting the burned extent.


Fig. 7. Visualisation of visible light index accuracy. Vegetation discrimination: (a) DEVI; (b) RGRI; (c) VDVI; (d) NGRDI. Burned areas discrimination: (e) NGBDI; (f) GBRI.


3.2 CHM + DEVI/NGBDI image fusions

During data preprocessing, the challenge arose from the fact that the airborne laser data and image data originated from different unmanned aerial systems, making initial setup difficult to standardize. The solution involved performing aerial triangulation on the UAV images, generating a substantial and dense point cloud. To achieve alignment, an iterative closest point (ICP) algorithm was applied, allowing the UAV image point cloud to be registered with the laser point cloud. This involved iterative transformations in three-dimensional space, including rotation and translation, to minimize errors and obtain optimal registration results. Point clouds serve as the foundation for subsequent digital products, and achieving accurate spatial and geographic coordinates for matched point cloud data helps reduce precision errors. This alignment ensures consistent spatial three-dimensional coordinates and resolution for the generated CHM and DEVI/NGBDI images.

Integration approach: the vectorized extraction results obtained from computing the visible vegetation indices on the RGB images underwent morphological reconstruction to eliminate unreasonable values. In MATLAB, these results were intersected with the vector data obtained from segmenting the laser point cloud data through the CHM, producing the fully fused CHM + DEVI and CHM + NGBDI images.
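The sketch below shows one way to realize these two steps with MATLAB's Computer Vision Toolbox; treating the intersection as a mask applied to the CHM is our reading of the fusion described above, and all variable names are hypothetical.

```matlab
% Registration: align the image-derived cloud to the LiDAR cloud with ICP.
tform    = pcregistericp(pcImg, pcLidar);   % pcImg, pcLidar: pointCloud objects
pcImgReg = pctransform(pcImg, tform);       % image cloud in the LiDAR frame

% Fusion: keep CHM heights only where the index maps flag vegetation, so the
% result carries both spatial structure (CHM) and color information (index).
chmDevi  = chm .* (devi  > deviThresh);     % CHM + DEVI fusion image
chmNgbdi = chm .* (ngbdi > ngbdiThresh);    % CHM + NGBDI fusion image
```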

The fusion effects are compared in Figs. 8 and 9. In the high-resolution images, accurate delineations of grassland and tree areas were obtained through human-machine interaction and overlaid on the DEVI index, the CHM, and the fused images. Figure 8 clearly shows that in the wildfire-affected area, some vegetation exhibits lower green-band reflectance due to burning, presenting a gray-black color similar to the ground. This difference is hard to discern in the image maps or by eye. The fused CHM + NGBDI image, however, leverages the elevation of the burned trees in the CHM and accurately delineates the boundary between ground and vegetation in the burned area.


Fig. 8. Fusion effect comparison chart. (a) Tree extent shown on high-resolution photographs; (b) Comparison of correct and incorrect tree extents on NGBDI; (c) Tree extent shown on CHM; (d) Tree extent shown on CHM + NGBDI.



Fig. 9. Fusion effect comparison chart. (a) Correct grass extent shown on DEVI; (b) Comparison of correct and incorrect grass extents on CHM; (c) Correct grass extent shown on CHM + DEVI; (d) DEVI unable to show the correct tree extent; (e) Tree extent shown on CHM; (f) Tree extent shown on CHM + DEVI.


Similarly, in the vertical stratification of vegetation in forested areas, the DEVI vegetation index extraction results cannot distinguish between the tree layer, shrub layer, and grassland, as all three exhibit the same grayscale (Fig. 9(d)). The accurate forest floor elevation information in CHM effectively separates the vertical vegetation structure (Fig. 9(e)). The fused CHM + DEVI image distinguishes the vegetation tree layer, shrub layer, and grassland (Fig. 9(f)), accurately outlining the contours of individual tree canopies and presenting results that align more closely with the real topography of the forested area.

On the other hand, in the DEVI index map, distinguishing between bare ground and vegetation is straightforward, allowing for the precise separation of vegetation and ground areas (Fig. 9(a)). However, in CHM, spatial information containing variations in terrain slope can be confused with grassland elevation changes (as indicated by the dashed line in Fig. 9(b)), causing some grassland areas to be misjudged as the ground and reducing the accuracy of vegetation extraction in that region. The fused CHM + DEVI image significantly increases the grassland area (Fig. 9(c)), enhancing ground segmentation accuracy.

3.3 Classification regression tree adaptive extraction

The processed fused images were used to construct a training sample set, and the CART algorithm was employed for computation; the decision tree models built in different experimental areas adapt to the topographical and land cover conditions of the corresponding forest regions. The CART algorithm, introduced by Breiman in 1984, employs binary recursive partitioning to iteratively construct a binary decision tree, using the Gini coefficient as the metric for branching attributes. Based on the Gini coefficient, the algorithm recursively bifurcates the unclassified training sample set into two branches at each split, forming a node and two branches per iteration, until the current sample set is designated a leaf node or the stopping criteria are satisfied. The result is a concise and clear decision tree model.

Let S be a sample set of size m whose samples fall into n categories Ci (i = 1, 2, …, n). The Gini coefficient is calculated as:

$$Gini(S) = 1 - \sum\limits_{i = 1}^n {p_i^2} ,\quad {p_i} = \frac{{|{{C_i}} |}}{{|S |}}$$

For the sample set S, attribute H is selected as the branching condition, splitting S into a subset S1 that satisfies condition H and a subset S2 composed of the remaining samples, with the conditional Gini coefficient:

$$Gini(S|H) = \frac{{|{{S_1}} |}}{{|S |}}Gini({S_1}) + \frac{{|{{S_2}} |}}{{|S |}}Gini({S_2})$$

The Gini gain coefficient indicates the extent to which the condition reduces information uncertainty; the attribute with the largest gain coefficient is used as the root node attribute of the decision tree:

$$\Delta Gini(H) = Gini(S) - Gini(S|H)$$
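These formulas translate directly into code. The MATLAB sketch below also shows the corresponding library call, since fitctree uses the Gini diversity index as its split criterion; the feature matrix X (one row per training pixel, e.g. fused CHM value plus index value) and the label vector y are hypothetical stand-ins for the training samples.

```matlab
% Gini coefficient of a label vector y, as defined above:
gini = @(y) 1 - sum((histcounts(categorical(y)) ./ numel(y)).^2);

% Gini gain of a candidate split y -> {y1, y2}:
giniGain = @(y, y1, y2) gini(y) ...
    - (numel(y1)/numel(y))*gini(y1) - (numel(y2)/numel(y))*gini(y2);

% In practice the whole recursion is delegated to the library:
tree   = fitctree(X, y, 'SplitCriterion', 'gdi');  % gdi = Gini diversity index
labels = predict(tree, Xall);                      % classify every fused-image pixel
```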

3.4 Improvements to the marker-controlled watershed algorithm

The direct application of the Watershed algorithm often leads to over-segmentation, especially in the case of fused images where the noise level is significantly higher after overlaying the two images. Therefore, it is essential to employ image denoising algorithms. Experimental findings indicate that conventional algorithms mostly remove some noise but exhibit suboptimal performance in addressing the irrational values generated after fusion. In this study, morphological opening and closing reconstruction operations were applied to process the data, eliminating noise and correcting extreme values within regions. The reconstruction-based opening operation effectively removes small spikes and connections between tree crowns, while the reconstruction-based closing operation fills in small pixel holes.

The geodesic dilation $D_G^1(P)$ and geodesic erosion $E_G^1(P)$ of size 1 of the marker image P with respect to the template image G are defined as:

$$\left\{ {\begin{array}{{c}} {D_G^1(P) = (P \oplus B) \cap G}\\ {E_G^1(P) = (P\Theta B) \cup G} \end{array}} \right.$$
where ${\oplus}$ is the dilation operation, $\Theta $ is the erosion operation, and B is the structuring element.

The geodesic dilation $D_G^n(P)$ and geodesic erosion $E_G^n(P)$ of size n of the marker image P with respect to the template image G are defined iteratively as:

$$\left\{ {\begin{array}{{c}} {D_G^n(P) = D_G^1(D_G^{n - 1}(P))}\\ {E_G^n(P) = E_G^1(E_G^{n - 1}(P))} \end{array}} \right.$$

The morphological reconstruction by dilation of the template image G from the marker image P is denoted $R_G^D(P)$, and the reconstruction by erosion is denoted $R_G^E(P)$; for an image of finite size, geodesic dilation and geodesic erosion converge and stabilise after k iterations. The formulas are:

$$\left\{ {\begin{array}{{c}} {R_G^D(P) = D_G^k(P)}\\ {R_G^E(P) = E_G^k(P)} \end{array}} \right.$$

Morphological opening and closing by reconstruction use the original image as the template: the original image is eroded or dilated, the processed result serves as the marker image, and the marker is then reconstructed against the template. Opening by reconstruction $O_R^n(P)$ is erosion followed by reconstruction by dilation, and closing by reconstruction $C_R^n(P)$ is dilation followed by reconstruction by erosion, expressed as:

$$\left\{ {\begin{array}{{c}} {O_R^n(P) = R_P^D[{P\Theta mB} ]}\\ {C_R^n(P) = R_P^E[{P \oplus mB} ]} \end{array}} \right.$$
where m is the number of iterations of the structuring element B over the image P.
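In MATLAB, which the authors name as their implementation environment, opening and closing by reconstruction as defined above can be driven by imreconstruct, as in the sketch below; the structuring-element size is an assumption, and F is a hypothetical name for a fused image.

```matlab
% Opening and closing by reconstruction on a fused image F.
se     = strel('disk', 3);            % structuring element B (assumed radius)
marker = imerode(F, se);              % erode, then reconstruct by dilation under F
Fopen  = imreconstruct(marker, F);    % removes small spikes and crown bridges

marker2 = imdilate(Fopen, se);        % dilate, then reconstruct by erosion
Fclean  = imcomplement( ...           % erosion-reconstruction via complements
    imreconstruct(imcomplement(marker2), imcomplement(Fopen)));
                                      % fills small pixel holes in the crowns
```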

The Watershed algorithm operates by recognizing subtle variations in the grayscale of images to perform individual tree segmentation. The primary principle involves inverting the grayscale values of each pixel, where each pixel value corresponds to the elevation in the terrain. This process transforms local maxima into local minima, initiating a flooding simulation starting from the minimum values. As the water level rises, adjacent basins are formed, and dams are constructed at critical points, representing the contours of individual trees. The marker-controlled watershed method transforms automatically detected local minima in the watershed into fixed values, facilitating the watershed transformation and eliminating false crown points, reducing over-segmentation for a more accurate tree segmentation.

The improved segmentation algorithm enhances the marker-controlled watershed method using the fused CHM + DEVI and CHM + NGBDI images, as depicted in the algorithm flowchart in Fig. 10. After image fusion, morphological opening and closing by reconstruction are applied to remove image noise and correct unreasonable values. A classification and regression tree algorithm then builds a training sample set from the fused image containing both color and elevation information, segments the ground areas, and adaptively extracts vegetation into trees, shrubs, and grassland. Finally, within the tree areas, a local maximum algorithm detects tree vertices, which serve as foreground markers; non-vegetation areas are designated as background markers, and the watershed transformation is executed on the marked image. The algorithm was implemented in MATLAB, with the marker image stored as an 8-bit (uint8) image, foreground markers assigned a value of 255 and background markers a value of 0, before executing the watershed transformation.
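A condensed sketch of this marker-controlled transform follows; it reuses the hypothetical Fclean and treeMask names from the earlier sketches, and the Gaussian smoothing radius stands in for the local maximum detector's window.

```matlab
% Marker-controlled watershed on the reconstructed fused image 'Fclean'.
fg = imregionalmax(imgaussfilt(Fclean, 2)) & treeMask;  % treetop (foreground) markers
bg = ~treeMask;                                         % non-tree background markers
surfInv = imimposemin(-Fclean, fg | bg);                % minima only at the markers
Lw = watershed(surfInv);                                % 0-valued ridges = crown contours
crowns = (Lw > 0) & treeMask;                           % individual crown regions
```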


Fig. 10. Improvement process for the marker-controlled watershed algorithm.


3.5 Point cloud correction for offset tree height

Due to mutual occlusion of tree crowns in dense forest areas, generating a CHM may result in some voids, characterized by depressions where the height is lower than the surrounding points. Additionally, CHM can exhibit distortions under certain conditions, impacting the accuracy of tree height extraction. To enhance the final accuracy of tree height extraction, it is imperative to mitigate the precision errors introduced by CHM.

During the data processing phase, the canopy models may exhibit voids and distortions. However, the original laser point cloud data, untouched by operations such as rasterization and other dimensionality reduction processes, does not suffer from data distortion, representing the true elevation. By refining the Watershed algorithm, accurate individual tree outlines are obtained. The point cloud is then segmented based on these outlines, and by traversing each segmented block, the normalized coordinates of the highest point are determined, providing the elevation of each individual tree.

Nie et al. [12] investigated the distortions in CHMs. When the ground slope angle is less than the tree crown angle, the tree top position of a conical crown is not displaced; when the ground slope angle exceeds the tree crown angle, the tree top point in the CHM deviates, introducing errors into the tree height measurement. Even on relatively small terrain slopes, spherical and elliptical crowns exhibit tree top displacement. Therefore, under most terrain conditions, CHMs constructed from point clouds inherently contain tree height errors. Figure 11 illustrates the relationship between the slope angle $\theta$ and the crown angle $\lambda$ for a conical crown, where $\theta$ is the ground slope angle, $\lambda$ is the crown angle, H is the tree height, L is the height of the crown edge, R is the crown radius, and the x-coordinate is measured from the tree root point.

$$\tan (\lambda ) = \frac{{H - L}}{R}$$
$$H(x) = H + x\left( {\frac{{H - L}}{R} - \tan (\theta )} \right)$$
$$H(x) = H + x({\tan (\lambda ) - \tan (\theta )} ),\quad - R \le x \le 0$$
$$\left\{ \begin{array}{l} {x_{\max }} = 0\\ {H_{\max }} = H\\ \Delta H = {H_{\max }} - H = 0 \end{array} \right.\quad \textrm{if }\theta \le \lambda$$
$$\left\{ \begin{array}{l} {x_{\max }} = - R\\ {H_{\max }} = H - R(\tan (\lambda ) - \tan (\theta ))\\ \Delta H = {H_{\max }} - H = R(\tan (\theta ) - \tan (\lambda )) \end{array} \right.\quad \textrm{if }\theta > \lambda$$


Fig. 11. Schematic diagram of the relationship between slope angle and canopy angle. (a) Accurate tree apex position when canopy angle is greater than slope angle; (b) Tree apex position in CHM when canopy angle is greater than slope angle; (c) Accurate tree apex location when canopy angle is less than slope angle; (d) Tree apex position in CHM when canopy angle is less than slope angle.


In these equations, ${x_{\max }}$ is the x-coordinate at which the local function $H(x)$ is maximized, and ${H_{\max }}$ is the local maximum of $H(x)$, i.e., the tree height derived from the CHM. According to the equations, the horizontal displacement never exceeds the crown radius R. Consequently, the tree apex in the original point cloud always lies within the crown contour, as illustrated in Fig. 12.


Fig. 12. Horizontal offset of tree vertices in the CHM. (a) No offset (correct tree vertex position); (b) Small offset; (c) Maximum offset.


Tree height is defined as the vertical distance between the highest point of the tree canopy and the ground. Whatever the direction or magnitude of the treetop displacement in the CHM, the true treetop necessarily falls within the corresponding crown contour. By traversing the original point cloud to locate the canopy's highest point and subtracting the DEM elevation interpolated at that position, the tree height is determined, correcting the offset error of the CHM, as illustrated in Fig. 13.
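The correction reduces to indexing the highest raw return per crown and subtracting the interpolated ground elevation, e.g. as in the sketch below; pts, ptLabel, and ground are hypothetical names for the raw returns, their crown assignments, and the filtered ground points.

```matlab
% Per-tree height correction from the raw LiDAR returns.
Fdem  = scatteredInterpolant(ground(:,1), ground(:,2), ground(:,3)); % DEM lookup
treeH = zeros(max(ptLabel), 1);
for k = 1:max(ptLabel)
    idx = find(ptLabel == k);              % returns inside crown contour k
    [~, j] = max(pts(idx, 3));             % highest raw return = true treetop
    top = pts(idx(j), :);
    treeH(k) = top(3) - Fdem(top(1), top(2));  % height above interpolated ground
end
```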


Fig. 13. Process of correcting the CHM offset error with the raw laser point cloud. (a) Extracted single-tree contour range; (b) Extracted single-tree point cloud blocks; (c) Corrected tree vertices.


4. Results

4.1 Adaptive extraction results based on training samples

CART is a typical supervised classification algorithm, so the selection of representative and typical samples within the study area plays a crucial role in the quality of the training set. The study area, situated near the foothills, contains sporadic low-rise structures and a few artificial fish ponds, which were excluded from the classification.

The surface vegetation in the region is diverse, including samples from forest paths and some open spaces within the forest, with exposed rock and gravel areas on hills and mountains. Grasslands are distributed along roadsides, forest edges, and internal clearings, while shrubs encompass various vegetation types, making them complex and challenging to distinguish but widespread in distribution. The dominant tree species, such as regional oak and hawthorn, are found in dense forested areas. By interpreting the distribution of these features, the classification was organized into two schemes: burned and unburned regions in Study Area 1, and four classes (trees, shrubs, grassland, and bare ground) in Study Area 2. Under this scheme, the two fused images, CHM + NGBDI and CHM + DEVI, were processed using a human-machine interactive approach to select 163 sample regions as training objects. The chosen samples are evenly distributed throughout the study area and represent typical regions of each category.

The segmentation results adaptively extracted from the training samples are depicted in Fig. 14(b) and (d). Land cover categories are identified by distinct colors, clearly displaying the extent of the gray-black areas left by the wildfire; within this extent, voids of various sizes accurately reflect where vegetation was removed. Regarding vegetation segmentation, in the horizontal direction deciduous trees are predominantly distributed in dense forested areas, while grasslands thrive on both sides of roads and at the forest edges. The vertical structure shows a vegetation arrangement from low to high, with grasslands enclosing shrub layers leading up to the deciduous tree canopy. The outlines of the deciduous crowns are clearly visible, and the overall results align with the topographical growth conditions of the study area.


Fig. 14. Classification results. (a) CHM + NGBDI; (b) Extraction results for burned areas and unburned areas; (c) CHM + DEVI; (d) Trees, shrubs, grasses and ground extraction results.


4.2 Verification of single-tree segmentation accuracy

After adaptively extracting the study area's deciduous trees, shrubs, grassland, and ground, the watershed algorithm was employed, constrained to the identified tree areas, to delineate individual tree outlines. The original laser point cloud was then segmented based on these outlines. The segmented point clouds of individual trees are superimposed and displayed in Fig. 15, which shows the segmentation results for the six sample plots; the effectiveness of the segmentation is clearly visible. For each segmented point cloud block, the highest point was indexed as the true treetop point, and subtracting the DEM elevation interpolated at that position yielded an accurate vegetation height.


Fig. 15. Split point cloud block overlay display. (a) Sample 1; (b) Sample 2; (c) Sample 3; (d) Sample 4; (e) Sample 5; (f) Sample 6.


The accuracy assessment of single tree segmentation precision was conducted by comparing the tree counts from the six sample plots with the ground-truth data. Precision evaluation was performed using specific metrics, namely recall rate (R), precision rate (P), and overall accuracy (F1 score), to assess the accuracy of individual tree segmentation [33,34], and the calculation formula was as follows:

$$R = \frac{{{T_P}}}{{{T_P} + {F_N}}} \times 100\%$$
$$P = \frac{{{T_P}}}{{{T_P} + {F_P}}} \times 100\%$$
$$F1 = \frac{{2RP}}{{R + P}} \times 100\%$$

In these formulas, TP is the number of trees correctly detected, FN the number of trees missed, and FP the number of trees erroneously detected.
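For a single sample plot, these metrics reduce to a few lines of MATLAB, with TP, FN, and FP taken from the field comparison:

```matlab
% Recall, precision, and F1 score as defined above, for one sample plot.
R  = TP / (TP + FN);                 % recall
P  = TP / (TP + FP);                 % precision
F1 = 2 * R * P / (R + P);            % overall accuracy
fprintf('R = %.1f%%, P = %.1f%%, F1 = %.1f%%\n', 100*R, 100*P, 100*F1);
```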

The precision evaluation results for both the original Watershed algorithm and the improved algorithm are presented in Tables 3 and 4. For sample plots 1 through 6, the recall rate (R) increased by 8.7%, 3.5%, 3.3%, 4.6%, 4.2%, and 1.3%, respectively, while the precision rate (P) increased by 4.8%, 0.2%, 3.5%, 6.3%, 4.3%, and 1.8%, and the F1 score increased by 6.7%, 1.9%, 3.4%, 5.5%, 4.2%, and 1.6%, respectively. Overall, the recall rate (R) increased by 4.1%, precision rate (P) increased by 3.7%, and the F1 score increased by 3.9%. Analysis revealed that the improvement in single-tree segmentation accuracy is related to the degree of tree complexity in the sample area. Sample plots 3 and 5, with similar tree complexities, showed close improvement effects in F1 scores. Sample Plot 3, with a minor portion of wildfire remnants, had a slight impact on the growth changes of trees in the region. Sample Plot 1, with normally growing trees mixed with post-fire regrowth, exhibited a significant improvement in single-tree segmentation after excluding the disturbance from the burned remnants. Sample Plot 4, mainly composed of artificial vegetation with larger tree spacing, had a better single-tree extraction effect after eliminating interference from other mixed vegetation. However, the accuracy improvement for sample plots 2 and 6 was less pronounced. Sample Plot 2, located at the center of the burned area with sparse and less complex vegetation, primarily consisted of surviving single tree species, resulting in less noticeable enhancement. Sample Plot 6, with dense and less complex vegetation, posed greater challenges in segmentation under similar conditions, making it difficult to alleviate misclassification issues. It can be observed that the improved algorithm has limitations in enhancing segmentation effectiveness under single-species conditions.


Table 3. Accuracy evaluation of single tree segmentation in MCWS algorithm


Table 4. Accuracy evaluation of single tree segmentation based on improved algorithm

To objectively and clearly assess the strengths and weaknesses of the improved algorithm, the segmentation results of this algorithm were compared and analyzed against three other tested single-tree segmentation algorithms, namely, the MCWS algorithm, point cloud distance clustering algorithm [35], and deep learning algorithm [36], as depicted in Fig. 16. The analysis revealed that, among the six sample plots, the segmentation accuracy ranking was as follows: deep learning algorithm = improved algorithm > MCWS algorithm > point cloud distance clustering algorithm. The segmentation accuracy of the improved algorithm was generally similar to that of the deep learning algorithm. However, the precision of the deep learning algorithm is influenced by the training samples, requiring the manual selection of a substantial number of representative deciduous tree samples. In cases where there are numerous interfering factors in the sample plots, such as high vegetation complexity in sample plots 1 and 4, the accuracy of the deep learning algorithm is lower than that of the improved algorithm. Similarly, in sample plots 2 and 6 with low vegetation complexity, where a single tree species facilitates deep learning training segmentation, the accuracy is superior to that of the improved algorithm.


Fig. 16. Comparison of multiple single tree segmentation algorithms.


4.3 Vegetation height accuracy validation

The crown contours obtained from individual tree segmentation were used, together with the point cloud data, to correct the displaced tree tops in the CHM and extract individual tree heights. Finally, the accuracy of the vegetation information was analysed by comparing the measured tree height (H) with the extracted height (h); the average height accuracy $\Delta H$ was calculated for each sample plot as:

$$\Delta H = \left( {1 - \frac{{|{H - h} |}}{H}} \right) \times 100\%$$
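Applied per tree and averaged over a plot, this formula reads as follows, with H and h as vectors of measured and extracted heights for that plot:

```matlab
% Per-tree height accuracy as defined above, then the plot-level mean.
dH = (1 - abs(H - h) ./ H) * 100;    % percent accuracy for each tree
plotAccuracy = mean(dH);             % average reported per sample plot
```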

The comparative analysis results of vegetation information extraction accuracy are depicted in Fig. 17. The refinement of the Watershed algorithm showed improvements in tree height extraction accuracy for Sample plots 1, 2, 3, 4, 5, and 6 by 8.8%, 1.4%, 1.7%, 6.4%, 1.8%, and 0.3%, respectively. The analysis revealed that the enhanced algorithm's accuracy improvement was associated with the degree of complexity in each sample plot, including the mixture of trees, shrubs, and grass. Sample plots 3 and 5, with similar degrees of tree and shrub mixtures, exhibited comparable extraction results. Notably, Sample Plot 1 had newly grown vegetation surrounding varying-height trees, causing significant interference in the accurate extraction of tree height. Sample Plot 4 displayed uneven vertical vegetation distribution, with artificially planted vegetation at a lower height, making it more susceptible to interference from other vegetation layers. Therefore, the improved extraction results were more pronounced in these two areas. Sample Plot 2, with charred oak trees of overall lower height, and Sample Plot 6, with dense vegetation cover and low mixture of trees, shrubs, and grass, showed limited improvement in extraction accuracy in these regions.


Fig. 17. Comparative accuracy evaluation of vegetation height extraction results.


5. Discussion

When processing data from different sources, especially during the step of generating point clouds, the extracted vegetation information from a single data source may have limitations to varying extents. Laser point clouds, based on elevation information, may have terrain fluctuations similar to grassland elevation, causing interference in extracting ground and grassland information. Image-based point clouds can only capture surface textures of the canopy, lacking understory terrain information. Results obtained through visible light vegetation indices are limited by spectral reflectance, powerless in conditions of no reflection and oversaturation, and challenging to segment deciduous trees, shrubs, and grasslands with similar spectra. However, their respective strengths can complement each other under certain conditions. Laser point clouds, based on penetration characteristics, can capture some understory information, while image-based point clouds can preserve RGB colors and textures, enabling precise object classification through RGB visible light bands.

In this study, LiDAR point clouds and image-based point clouds were registered using the ICP algorithm to minimize errors between the two datasets, ensuring the fusion accuracy of the newly constructed CHM + DEVI and CHM + NGBDI images. Training samples were constructed by fusing images and employing the CART algorithm to adaptively segment burned and unburned areas, further categorizing vegetation into deciduous trees, shrubs, and grasslands. The method's advantage lies in its ability to construct different training samples based on the distribution of terrain and objects in different experimental areas, resulting in adaptively changing decision tree models that better fit the terrain and object conditions of the corresponding forest areas. With a small number of representative samples, accurate segmentation results can be obtained. The overall approach is akin to a simplified deep learning algorithm, differing in that it constrains a multitude of parameters affecting extraction results into two categories: elevation and visible light indices, making it easier to segment and applicable to various terrains.

Next, improvements were made to the Watershed algorithm based on the segmented deciduous trees. In the Watershed algorithm, inverted watersheds are affected by water ponds formed by surrounding vegetation during construction. Therefore, the extraction of deciduous tree information may be disturbed by shrubs, and shrubs may be influenced by lower vegetation layers in the vertical structure. Experimental results also indicate that the improvement effect of the algorithm becomes more pronounced in sample areas with greater vegetation complexity. This study did not experimentally investigate the segmentation accuracy of the shrub layer and vegetation height, and the overall improvement pattern of the algorithm in the vertical vegetation structure. This will be the focus of future research.

The fusion of images inherits the advantages from two different data sources while being influenced by the accuracy limitations of the original model. Particularly concerning the crown height model, when the ground slope angle is smaller than the tree crown angle, the top position of a conical crown remains unchanged. However, when the ground slope angle exceeds the crown angle, the top point shifts, introducing tree height errors. Additionally, spherical and elliptical crowns may experience top displacement on relatively small terrain slopes. After algorithmic segmentation, precise single-tree outlines are obtained, ensuring that even with horizontal offsets in CHM, the accurate tree top remains within the outline. To enhance the accuracy of tree height extraction, the original LiDAR point cloud is divided into blocks based on single-tree outlines. The highest point within each point cloud block is indexed as the tree top point, providing the most accurate vegetation height information and mitigating the accuracy errors introduced by CHM.

While this study has achieved some improvements, the analysis has been limited to assessing the algorithm's effectiveness in single-tree extraction for deciduous trees under different levels of vegetation complexity. Whether from a pattern or applicability perspective, the analysis remains relatively narrow. Therefore, we propose the following suggestions for future work:

  • 1. Although airborne LiDAR data offers higher precision compared to airborne imagery and can penetrate vegetation to capture some information beneath the tree canopy, it still faces certain limitations in acquiring the vertical structure of a forest. The top-down laser approach is unable to accurately capture the understorey's vertical layering, and mutual occlusion between tree crowns may result in the loss of lower crown point cloud data [37]. This limitation hinders the comprehensive and authentic acquisition of the overall forest structure. In our future endeavors, we plan to overcome these challenges by integrating UAVs and terrestrial laser scanning (TLS) [38–40]. The UAV-mounted LiDAR will conduct a top-down circular scan, providing precise measurements of tree height, while the ground-based LiDAR will perform a bottom-up circular scan, yielding accurate measurements of tree diameter at breast height. The synergy of aerial and ground-based approaches aims to achieve comprehensive forest data, addressing the limitations posed by each method individually.
  • 2. Forest fire hazard warning is a major focus in current research on forest vegetation [41]. In the experiments conducted in this study, the visible light index partially extracted the extent of the burned area. However, the fused image still successfully segmented the vertical structure of vegetation within the burned area, demonstrating a promising practical application. Subsequent research should delve deeper into the application of forest fire hazard warnings [42,43]. Rather than merely analyzing the impact of wildfire encroachment on vegetation, it is essential to establish more research areas within burned zones. Through comparative experiments, regularities can be identified and used to enhance the universality of the research method employed in this study.
  • 3. Constrained by the limited nature of the sample data, a more detailed segmentation effect of different vegetation was not summarized, and this becomes a focal point for future research [44]. For instance, distinguishing crown width differences at the same height could be valuable, especially in artificially planted areas with similar tree structures, involving a mix of two or more tree species. Different tree species exhibit variations in growth changes during the same period. Parameters based on these specific attributes can be established to segment different tree species [45,46]. This segmentation approach could also be applied to analyze the condition of the same vegetation during growth, considering factors such as pest infestations that inevitably result in delayed growth compared to unaffected trees during the same period. By expanding the application value in agriculture, commerce, and other domains, the practicality of this research method can be enhanced.

6. Conclusions

This paper introduces a novel approach that integrates LiDAR point clouds and visible light vegetation indices to create CHM + DEVI/NGBDI fusion images incorporating both color information and spatial structure. The fusion images are employed for the adaptive extraction of the vertical vegetation structure within the study area, and improvements to the watershed algorithm are implemented specifically for the tree regions. The research outcomes indicate:

  • 1. Training samples were constructed by fusing images, and an adaptive segmentation of the burned and unburned areas was carried out using the CART algorithm. Vegetation was further categorized into trees, shrubs, and grassland. The visible light vegetation index was employed to distinguish between grassland and ground areas. Leveraging the three-dimensional spatial information from LiDAR point clouds allowed for the separation of vertical vegetation structures. Simultaneously, the sensitivity of certain visible light indices in the wildfire-affected areas effectively isolated the burned areas. Additionally, a more in-depth analysis was conducted on the vegetation distribution in different locations within the burned areas.
  • 2. The improved watershed segmentation algorithm demonstrates a higher extraction accuracy compared to the original algorithm. Morphological reconstruction and fusion of images are employed to eliminate noise and irrational values. The marking range is constrained within the arborescent area, enhancing the precision of individual tree extraction. Additionally, the offset tree vertices are restored through the original point cloud, overcoming the accuracy deficiencies of the CHM. The segmentation accuracy of four single-tree segmentation algorithms was compared across six sample plots, and the enhanced algorithm's performance was validated using ground-truth data. Overall, the recall rate (R) increased by 4.1%, precision rate (P) increased by 3.7%, F1 score increased by 3.9%, and the average extracted tree height accuracy improved by 3.42%. The results indicate that the extraction performance of the improved algorithm is more effective in areas with higher vegetation complexity.

Compared to the limited vegetation information obtainable from a single data source, this method effectively integrates the complementary advantages of laser and visible light data. It enables the acquisition of more comprehensive and detailed forest vegetation information, which is of significant importance for improving the accuracy of forestry resource surveys.
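
The tree-vertex correction step (partitioning the raw point cloud by segmented crown contour and keeping each block's highest return) can likewise be sketched in a few lines. This is an assumed implementation, not the authors' code; world_to_pixel is a hypothetical helper mapping world coordinates to integer raster indices, and points is assumed to be an (N, 3) array of (x, y, z) LiDAR returns.

```python
import numpy as np

def correct_tree_heights(points, crown_labels, world_to_pixel):
    """Recover per-tree apexes from the raw LiDAR returns."""
    # Map each return onto the crown-segmentation raster.
    rows, cols = world_to_pixel(points[:, 0], points[:, 1])
    valid = ((rows >= 0) & (rows < crown_labels.shape[0]) &
             (cols >= 0) & (cols < crown_labels.shape[1]))
    ids = np.zeros(len(points), dtype=int)
    ids[valid] = crown_labels[rows[valid], cols[valid]]

    apexes = {}
    for tid in np.unique(ids):
        if tid == 0:                       # 0 = non-tree / outside raster
            continue
        block = points[ids == tid]         # the crown's point-cloud block
        apexes[tid] = block[np.argmax(block[:, 2])]  # highest return wins
    return apexes                          # tid -> corrected (x, y, z) apex
```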

Funding

National Natural Science Foundation of China (NSFC) – Regional Innovation Development Joint Fund Key Project (U22A20566); Henan University of Science and Technology Special Funding Project for Basic Scientific Research Operating Costs (NSFRF170909); Key Scientific Research Project of Colleges and Universities in Henan Province, China (18B420003).

Disclosures

The authors declare no conflict of interest.

Data availability

The data presented in this study are available on request from the corresponding author.

References

1. C. Véga, J.-P. Renaud, S. Durrieu, et al., “On the interest of penetration depth, canopy area and volume metrics to improve Lidar-based models of forest parameters,” Remote Sens. Environ. 175, 32–42 (2016). [CrossRef]  

2. S. Jay, F. Baret, D. Dutartre, et al., “Exploiting the centimeter resolution of UAV multispectral imagery to improve remote-sensing estimates of canopy structure and biochemistry in sugar beet crops,” Remote Sens. Environ. 231, 110898 (2019). [CrossRef]  

3. S. Erasmi, M. Klinge, C. Dulamsuren, et al., “Modelling the productivity of Siberian larch forests from Landsat NDVI time series in fragmented forest stands of the Mongolian forest-steppe,” Environ. Monit. Assess. 193(4), 200 (2021). [CrossRef]  

4. S. A. Hall, I. C. Burke, D. O. Box, et al., “Estimating stand structure using discrete-return lidar: an example from low density, fire prone ponderosa pine forests,” For. Ecol. Manag. 208(1-3), 189–209 (2005). [CrossRef]  

5. W. Yan, H. Guan, L. Cao, et al., “An automated hierarchical approach for three-dimensional segmentation of single trees using UAV LiDAR data,” Remote Sens. 10(12), 1999 (2018). [CrossRef]  

6. P. Wang, Y. Tang, Z. Liao, et al., “Road-side individual tree segmentation from urban MLS point clouds using metric learning,” Remote Sens. 15(8), 1992 (2023). [CrossRef]  

7. Q. Wang, M. Wang, Q. Wang, et al., “Extraction of vegetation information from visible unmanned aerial vehicle images,” Trans. Chin. Soc. Agric. Eng. 31(5), 152–159 (2015). [CrossRef]  

8. T. Zhou, Q. Hu, Z. Han, et al., “Green vegetation extraction based on visible light image of UAV,” China Environ. Sci. (Chin. Ed.) 41(5), 2380–2390 (2021). [CrossRef]  

9. X. Shen, L. Cao, B. Yang, et al., “Estimation of forest structural attributes using spectral indices and point clouds from UAS-based multispectral and RGB imageries,” Remote Sens. 11(7), 800 (2019). [CrossRef]  

10. S. Yu, X. Chen, X. Huang, et al., “Research on the estimation of Chinese Fir stand volume based on UAV-LiDAR technology,” Forests 14(6), 1252 (2023). [CrossRef]  

11. W. Yan, H. Guan, L. Cao, et al., “A self-adaptive mean shift tree-segmentation method using UAV LiDAR data,” Remote Sens. 12(3), 515 (2020). [CrossRef]  

12. S. Nie, C. Wang, X. Xi, et al., “Assessing the Impacts of Various Factors on Treetop Detection Using LiDAR-Derived Canopy Height Models,” IEEE Trans. Geosci. Remote Sens. 57(12), 10099–10115 (2019). [CrossRef]  

13. X. Fu, Z. Zhang, L. Cao, et al., “Assessment of approaches for monitoring forest structure dynamics using bi-temporal digital aerial photogrammetry point clouds,” Remote Sens. Environ. 255, 112300 (2021). [CrossRef]  

14. L. Chang, Z. Zhang, Y. Li, et al., “Multilevel Extraction of Vegetation Type Based on Airborne LiDAR Data,” Can. J. Remote Sens. 46(6), 681–694 (2020). [CrossRef]  

15. W. Luo, S. Gan, X. Yuan, et al., “Test and Analysis of Vegetation Coverage in Open-Pit Phosphate Mining Area around Dianchi Lake Using UAV–VDVI,” Sensors 22(17), 6388 (2022). [CrossRef]  

16. X. Lu, W. Li, J. Xiao, et al., “Inversion of Leaf Area Index in Citrus Trees Based on Multi-Modal Data Fusion from UAV Platform,” Remote Sens. 15(14), 3523 (2023). [CrossRef]  

17. Y. Xu, D. Fu, H. Yu, et al., “High-resolution global mature and young oil palm plantation subclass maps for 2020,” Int. J. Digit. Earth 16(1), 2168–2188 (2023). [CrossRef]  

18. F. Meyer and S. Beucher, “Morphological segmentation,” J. Vis. Commun. Image Represent. 1(1), 21–46 (1990). [CrossRef]  

19. T. Ma, J. Zhou, H. Wang, et al., “Research on experimental teaching of watershed image segmentation based on morphological reconstruction,” Exp. Technol. Manag. 38(3), 93–97 (2021).

20. M. Xu, H. Yang, H. Li, et al., “Single tree segmentation in close-planting orchard using UAV digital image,” Geomatics Inf. Sci. Wuhan Univ. 47(11), 1906–1916 (2022).

21. X. Xu, Z. Zhou, Y. Tang, et al., “Individual tree crown detection from high spatial resolution imagery using a revised local maximum filtering,” Remote Sens. Environ. 258, 112397 (2021). [CrossRef]  

22. C. Qian, C. Yao, H. Ma, et al., “Tree species classification using airborne LiDAR data based on individual tree segmentation and shape fitting,” Remote Sens. 15(2), 406 (2023). [CrossRef]  

23. J. Zörner, J. Dymond, J. Shepherd, et al., “LiDAR-based regional inventory of tall trees—Wellington, New Zealand,” Forests 9(11), 702 (2018). [CrossRef]  

24. K. Ma, C. Li, F. Jiang, et al., “Improvement of treetop displacement detection by UAV-LiDAR point cloud normalization: a novel method and a case study,” Drones 7(4), 262 (2023). [CrossRef]  

25. Y. He, B. Liang, J. Yang, et al., “An iterative closest points algorithm for registration of 3D laser scanner point clouds with geometric features,” Sensors (Basel) 17(8), 1862 (2017). [CrossRef]  

26. W. Zhang, J. Qi, P. Wan, et al., “An easy-to-use airborne LiDAR data filtering method based on cloth simulation,” Remote Sens. 8(6), 501 (2016). [CrossRef]  

27. J. A. Gamon and J. S. Surfus, “Assessing leaf pigment content and activity with a reflectometer,” New Phytol. 143(1), 105–117 (1999). [CrossRef]  

28. E. R. Hunt, M. Cavigelli, C. S. T. Daughtry, et al., “Evaluation of digital photography from model aircraft for remote sensing of crop biomass and nitrogen status,” Precis. Agric. 6(4), 359–378 (2005). [CrossRef]  

29. G. Bareth, A. Bolten, M. L. Gnyp, et al., “Comparison of uncalibrated RGBVI with spectrometer-based NDVI derived from UAV sensing systems on field scale,” ISPRS - Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. XLI-B8, 837–843 (2016). [CrossRef]

30. D. M. Woebbecke, G. E. Meyer, K. Von Bargen, et al., “Plant species identification, size, and enumeration using machine vision techniques on near-binary images,” in J. A. DeShazer and G. E. Meyer, eds. (1993), pp. 208–219.

31. D. M. Woebbecke, G. E. Meyer, K. Von Bargen, et al., “Color indices for weed identification under various soil, residue, and lighting conditions,” Trans. ASAE 38(1), 259–269 (1995). [CrossRef]  

32. R. Sellaro, M. Crepy, S. A. Trupkin, et al., “Cryptochrome as a sensor of the blue/green ratio of natural radiation in arabidopsis,” Plant Physiol. 154(1), 401–409 (2010). [CrossRef]  

33. L. Jing, B. Hu, J. Li, et al., “Automated Delineation of Individual Tree Crowns from Lidar Data by Multi-Scale Analysis and Segmentation,” Photogramm. Eng. Remote Sens. 78(12), 1275–1284 (2012). [CrossRef]  

34. A. O. Ok and A. Ozdarici-Ok, “2-D delineation of individual citrus trees from UAV-based dense photogrammetric surface models,” Int. J. Digit. Earth 11(6), 583–608 (2018). [CrossRef]  

35. E. Ayrey, S. Fraver, J. A. Kershaw, et al., “Layer stacking: a novel algorithm for individual forest tree segmentation from LiDAR point clouds,” Can. J. Remote Sens. 43(1), 16–27 (2017). [CrossRef]  

36. X. Hu and D. Li, “Research on a single-tree point cloud segmentation method based on UAV tilt photography and deep learning algorithm,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 13, 4111–4120 (2020). [CrossRef]

37. Y. Cao, J. G. C. Ball, D. A. Coomes, et al., “Tree segmentation in airborne laser scanning data is only accurate for canopy trees,” Ecology (2022).

38. M. Dai and G. Li, “Soft segmentation of terrestrial laser scanning point cloud of forests,” Appl. Sci. 13(10), 6228 (2023). [CrossRef]  

39. N. Saarinen, V. Kankare, T. Yrttimaa, et al., “Assessing the effects of thinning on stem growth allocation of individual Scots pine trees,” For. Ecol. Manag. 474, 118344 (2020). [CrossRef]  

40. J. Sun, P. Wang, Z. Gao, et al., “Wood–leaf classification of tree point cloud based on intensity and geometric information,” Remote Sens. 13(20), 4050 (2021). [CrossRef]  

41. C. Lian, C. Xiao, and Z. Feng, “Spatiotemporal characteristics and regional variations of active fires in China since 2001,” Remote Sens. 15(1), 54 (2022). [CrossRef]  

42. N. Stork, A. Mainzer, and R. Martin, “Native and non-native plant regrowth in the Santa Monica Mountains National Recreation Area after the 2018 Woolsey Fire,” Ecosphere 14(6), e4567 (2023). [CrossRef]  

43. Y. Qi, N. C. Coops, L. D. Daniels, et al., “Assessing the effects of burn severity on post-fire tree structures using the fused drone and mobile laser scanning point clouds,” Front. Environ. Sci. 10, 949442 (2022). [CrossRef]  

44. B. Chehreh, A. Moutinho, and C. Viegas, “Latest trends on tree classification and segmentation using UAV data—a review of agroforestry applications,” Remote Sens. 15(9), 2263 (2023). [CrossRef]  

45. Y. Quan, M. Li, Y. Hao, et al., “Tree species classification in a typical natural secondary forest using UAV-borne LiDAR and hyperspectral data,” GIScience Remote Sens. 60(1), 2171706 (2023). [CrossRef]  

46. B. Wang, J. Liu, J. Li, et al., “UAV LiDAR and hyperspectral data synergy for tree species classification in the Maoershan forest farm region,” Remote Sens. 15(4), 1000 (2023). [CrossRef]  

Figures (17)

Fig. 1. Overview of the study area. (a) Geographic location of the study area; (b) RGB point cloud; (c) LiDAR point cloud.
Fig. 2. Photographs showing the extent of vegetation. (a) Sample 1 is located at the boundary of the burned areas; (b) Sample 2 is located inside the burned areas; (c) Sample 3 is located outside the burned areas.
Fig. 3. SLAM elevation view showing the vegetation hierarchy. (a) Sample 4: high degree of vegetation mixing; (b) Sample 5: medium degree of vegetation mixing; (c) Sample 6: low degree of vegetation mixing.
Fig. 4. Overall flow chart.
Fig. 5. Visible light indices. Vegetation discrimination: (a) DEVI; (b) RGRI; (c) VDVI; (d) NGRDI. Burned-area discrimination: (e) NGBDI; (f) GBRI.
Fig. 6. Visible light index bimodal threshold method.
Fig. 7. Visualisation of visible light index accuracy. Vegetation discrimination: (a) DEVI; (b) RGRI; (c) VDVI; (d) NGRDI. Burned-area discrimination: (e) NGBDI; (f) GBRI.
Fig. 8. Fusion effect comparison. (a) Range of trees shown on high-resolution photographs; (b) Comparison of correct and incorrect tree ranges on NGBDI; (c) Range of trees shown on CHM; (d) Range of trees shown on CHM + NGBDI.
Fig. 9. Fusion effect comparison. (a) Correct grass range shown on DEVI; (b) Comparison of correct and incorrect grass ranges on CHM; (c) Correct grass range shown on CHM + DEVI; (d) DEVI unable to show the correct tree range; (e) Range of trees shown on CHM; (f) Range of trees shown on CHM + DEVI.
Fig. 10. Improvement process for the marker-controlled watershed algorithm.
Fig. 11. Schematic of the relationship between slope angle and canopy angle. (a) Accurate tree apex position when the canopy angle is greater than the slope angle; (b) Tree apex position in the CHM when the canopy angle is greater than the slope angle; (c) Accurate tree apex position when the canopy angle is less than the slope angle; (d) Tree apex position in the CHM when the canopy angle is less than the slope angle.
Fig. 12. Horizontal offset of tree vertices in the CHM. (a) No offset (correct tree vertex position); (b) Small offset; (c) Maximum offset.
Fig. 13. Correction of CHM offset errors using the raw laser point cloud. (a) Extracted single-tree contour range; (b) Extracted single-tree point cloud blocks; (c) Corrected tree vertices.
Fig. 14. Classification results. (a) CHM + NGBDI; (b) Extraction results for burned and unburned areas; (c) CHM + DEVI; (d) Extraction results for trees, shrubs, grasses, and ground.
Fig. 15. Overlay display of segmented point cloud blocks. (a) Sample 1; (b) Sample 2; (c) Sample 3; (d) Sample 4; (e) Sample 5; (f) Sample 6.
Fig. 16. Comparison of multiple single-tree segmentation algorithms.
Fig. 17. Comparative accuracy evaluation of vegetation height extraction results.

Tables (4)

Table 1. Visible light index
Table 2. Accuracy report of visible light index
Table 3. Accuracy evaluation of single tree segmentation in MCWS algorithm
Table 4. Accuracy evaluation of single tree segmentation based on improved algorithm

Equations (16)

$Gini(S) = 1 - \sum_{i=1}^{n} p_i^2, \qquad p_i = \frac{|C_i|}{|S|}$

$Gini(S \mid H) = \frac{|S_1|}{|S|}\,Gini(S_1) + \frac{|S_2|}{|S|}\,Gini(S_2)$

$\Delta Gini(H) = Gini(S) - Gini(S \mid H)$

$\begin{cases} D_G^{1}(P) = (P \oplus B) \wedge G \\ E_G^{1}(P) = (P \ominus B) \vee G \end{cases}$

$\begin{cases} D_G^{n}(P) = D_G^{1}\big(D_G^{n-1}(P)\big) \\ E_G^{n}(P) = E_G^{1}\big(E_G^{n-1}(P)\big) \end{cases}$

$\begin{cases} R_G^{D}(P) = D_G^{k}(P) \\ R_G^{E}(P) = E_G^{k}(P) \end{cases}$

$\begin{cases} O_R^{n}(P) = R_P^{D}\big[P \ominus nB\big] \\ C_R^{n}(P) = R_P^{E}\big[P \oplus nB\big] \end{cases}$

$\tan(\theta) = \dfrac{H_L}{R}$

$H(x) = H + x\left(\dfrac{H_L}{R} - \tan(\lambda)\right)$

$H(x) = H + x\big(\tan(\theta) - \tan(\lambda)\big), \qquad -R \le x \le 0$

$\begin{cases} x_{\max} = 0 \\ H_{\max} = H \\ \Delta H = H_{\max} - H = 0 \end{cases} \quad \text{if } \theta \ge \lambda$

$\begin{cases} x_{\max} = -R \\ H_{\max} = H - R\big(\tan(\theta) - \tan(\lambda)\big) \\ \Delta H = H_{\max} - H = R\big(\tan(\lambda) - \tan(\theta)\big) \end{cases} \quad \text{if } \theta < \lambda$

$R = \dfrac{TP}{TP + FN} \times 100\%$

$P = \dfrac{TP}{TP + FP} \times 100\%$

$F1 = \dfrac{2RP}{R + P} \times 100\%$

$\Delta H = \left(1 - \dfrac{|H - h|}{H}\right) \times 100\%$
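
As a quick numerical check on the apex-offset relations above, the short sketch below evaluates both branches. It illustrates the reconstructed formulas rather than reproducing the paper's code; the example values are arbitrary.

```python
import math

def apex_offset(H, R, theta_deg, lam_deg):
    """Evaluate both branches of the apex-offset equations.

    H: true tree height, R: crown radius, theta: canopy angle,
    lambda: terrain slope angle. On -R <= x <= 0 the CHM profile is
    H(x) = H + x * (tan(theta) - tan(lambda)).
    """
    t = math.tan(math.radians(theta_deg))
    l = math.tan(math.radians(lam_deg))
    if t >= l:                 # canopy steeper than slope: apex is correct
        return 0.0, H, 0.0
    x_max = -R                 # apex drifts to the uphill crown edge
    H_max = H - R * (t - l)    # = H + R * (tan(lambda) - tan(theta))
    return x_max, H_max, H_max - H

# A 10 m tree with a 2 m crown radius and a 20° canopy angle on a 30° slope:
# apex_offset(10, 2, 20, 30) -> (-2, ~10.43, ~0.43), i.e. the CHM apex
# shifts 2 m uphill and overstates the height by about 0.43 m.
```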