Abstract

With the development of three-dimensional laser scanning technology, the large volumes of point cloud data it generates place heavy computational pressure on storage, processing, and visualization. To address this, this paper proposes an edge-optimized voxel grid down-sampling method based on two regions, which aims to reduce the data volume while preserving key geometric accuracy and detail. By partitioning the point cloud into two regions and exploiting their different deviation characteristics, the method builds a two-region down-sampling model that simplifies the point cloud while accurately identifying and retaining edge contour feature points. Experimental results show that the proposed method performs well under both low-precision and high-precision conditions: it maintains the geometric features and surface area of the point cloud while simplifying it, and compared with other methods it performs outstandingly on the top surface contour deviation index and the surface area change rate. It offers high accuracy retention and a good simplification effect and is suitable for a variety of application scenarios.

1. Introduction

With the development of scanning technology and the rapid growth in point cloud generation speed, three-dimensional laser scanning offers technical characteristics that traditional measurement methods find difficult to match, showing significant advantages in measurement accuracy, efficiency, and comprehensiveness of data acquisition. However, the generation of massive point clouds has also led to the rapid accumulation of redundant data. These huge data sets impose great computational pressure on storage, processing, and visualization, consume substantial computing resources, and limit the efficiency of in-depth analysis and application. Point cloud lightweight processing technology therefore emerged, aiming to reduce complexity by removing data points or adjusting the structure, thereby improving processing and visualization efficiency. This process, however, carries the risk of information loss; in particular, the appearance details of complex buildings may be damaged, affecting the accuracy of subsequent analysis [1]. Current research therefore focuses on optimizing lightweight processing, striving to retain key geometric accuracy and detail while reducing the point cloud data volume to ensure comprehensive and accurate analysis. Existing research on point cloud simplification mainly focuses on improving simplification efficiency, reducing data volume, and retaining key geometric features, and the methods can be roughly divided into grid division methods, clustering methods, geometric feature methods, deep learning methods, and hybrid methods.

Among grid division methods, Daniels [2] divided the point cloud into quadrilaterals and extended the quadric error metric algorithm to quadrilaterals to simplify the point cloud. Zhao [3] proposed a point cloud simplification strategy based on uniform grid division: by constructing a uniform grid system to approximate the original point cloud, the complex curvature estimation step is avoided, simplifying the computation. Dong [4] applied an optimized voxel grid method combined with an edge-extraction lightweight algorithm to the laser point cloud of highway viaduct segment beams, which performed well in retaining the geometric contour and reducing computation time. Xiao [5] proposed an improved voxel grid down-sampling method that takes the center of gravity of the point cloud in each voxel as the sampling point, solving the uneven point distribution of traditional methods with higher efficiency.

Among clustering methods, Liu [6] used K-means clustering with an octree to cluster the point cloud, computed the Euclidean distance from each point to its cluster center, and retained the points whose distance meets a threshold; however, the algorithm is prone to local optima and to eliminating feature points. Wang [7] used multi-parameter K-means clustering, applying octree clustering to distinguish multiple modules and simplifying each module with a different method; the algorithm effectively retains some feature points. Shi [8] proposed a clustering-based point cloud simplification algorithm that divides the point cloud into grids, selects representative points, clusters and subdivides each class according to normal vector variation, and finally applies mean shift to generate modal points, preserving the point cloud features.

Among geometric feature methods, Li [9] proposed a scattered point cloud simplification method based on curvature Poisson disk sampling: by dividing point clouds into flat and feature regions and combining uniform sampling with Poisson disk sampling, efficient simplification is achieved. Yang [10] proposed a point cloud processing method based on curvature feature classification and down-sampling; by computing curvature and classifying before down-sampling, detail information is effectively preserved and the registration accuracy of three-dimensional reconstruction is improved. Chen [11] used fuzzy entropy iteration in point cloud simplification to retain more detailed features of the point cloud model and better maintain boundary integrity; however, the curvature of all data points must be computed, which is relatively expensive. Fu [12] proposed a point cloud simplification method based on dynamic grid division: the point cloud space is first rasterized and random sampling removes redundant points, then a Gaussian function evaluates and retains the edge feature points, avoiding holes during simplification and improving the visual quality of the simplified model.

Deep learning methods use neural models to realize point cloud down-sampling. Lee [13] proposed TransNet, a transformer-based point cloud sampling network that achieves task-oriented down-sampling by introducing a self-attention mechanism. Hui [14] proposed a method based on the PointNetVLAD network; by removing the ground plane and down-sampling to 4096 points, data down-sampling is completed, computation is reduced, and data quality is optimized. These methods are innovative in feature extraction and simplification efficiency but require large amounts of training data and computing resources.

Among hybrid methods, Yao [15] constructed a triangulation by combining the quadric error metric algorithm with the discrete curvature characteristics of three-dimensional point cloud data; this simplification accurately preserves the geometric structure of the original point cloud. Lyu [16] proposed a three-dimensional point cloud map down-sampling algorithm combining dynamic voxel filtering and edge extraction, which effectively balanced down-sampling efficiency and feature retention and verified its superiority against various down-sampling methods. Dehghanpour [17] proposed a point cloud down-sampling method based on transformer features, which effectively reduced feature loss and likewise compared favorably against various down-sampling methods. Li [18] combined the K-neighborhood three-dimensional voxel grid and bounding box methods; by computing the K-neighborhood distance and normal information of the point cloud, a grid was constructed and its center of gravity used as a representative point to achieve uniform simplification. Cheng [19] proposed an adaptive simplification method based on hybrid subdivision that estimates point cloud density to simplify the data, homogenizing the point distribution while retaining the geometric characteristics of the point cloud. Tian [20] combined the normal angle, curvature threshold, voxel simplification, and other methods; however, this approach extracts local feature points using curvature alone as the feature factor and threshold, so the extraction error is large.

In summary, current research mainly aims to improve the simplification efficiency of point cloud data and to reduce processing time and storage requirements through continuous algorithmic innovation [21]. However, while pursuing efficiency, relatively few studies optimize the edge contour deviation of the point cloud, which limits the ability of the simplified model to accurately express the original shape. Therefore, building on the voxel grid method, this paper proposes an edge-optimized voxel grid algorithm based on two regions. By constructing a two-region partition with a two-scale voxel configuration, the method resolves the disconnect between edge processing and down-sampling in existing methods. The point cloud is divided into edge and non-edge regions: in the edge region, small-scale voxels combined with normal vector constraints retain feature points, while in the non-edge region, large-scale voxels achieve efficient simplification. The two-scale strategy significantly improves simplification efficiency while guaranteeing edge accuracy and provides a reliable solution for segment beam point cloud processing.

2. Double-Region-Based Edge-Optimized Voxel Grid Down-Sampling

2.1. Algorithm Principle of Voxel Grid Method

A three-dimensional voxel grid is created from the point cloud data, and the side length L of the small cube cells to be divided is calculated. According to L, the voxel grid is decomposed into small cells, and each point is assigned to its corresponding cell. Cells containing no data points are deleted, and the center of gravity of the points in each remaining cell is calculated; this center of gravity replaces all the points in the cell, compressing the number of points. The method is simple, efficient, and easy to implement, requires no complex topology, and simplifies the point cloud as a whole. The principle of the voxel grid method is shown in Figure 1: the point cloud space is divided into uniform three-dimensional voxels, the center of gravity of all points in each cell is calculated, and that center of gravity replaces all points in the cell to achieve down-sampling.

The point cloud centroid (X, Y, Z) is calculated as follows:

$$X=\frac{1}{n}\sum_{i=1}^{n}X_i,\qquad Y=\frac{1}{n}\sum_{i=1}^{n}Y_i,\qquad Z=\frac{1}{n}\sum_{i=1}^{n}Z_i \tag{1}$$

In Formula (1), n denotes the number of points contained in a grid or voxel.
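As a concrete illustration, the following Python sketch (NumPy-based; the function and variable names are illustrative, not taken from the paper) implements this standard voxel grid method: each point is binned by its integer voxel index, and every occupied voxel is replaced by the centroid of Formula (1).

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Replace all points in each occupied voxel by their centroid (Formula (1)).

    points: (N, 3) array of XYZ coordinates; voxel_size: cell side length L.
    """
    # Integer voxel index of every point.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points sharing a voxel index; empty voxels simply never appear.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.ravel()
    # Sum coordinates per voxel, then divide by the point count
    # to obtain the centroid (X, Y, Z) of Formula (1).
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Example: a random cloud reduced with a grid of side 0.02.
cloud = np.random.rand(100_000, 3)
print(cloud.shape, "->", voxel_downsample(cloud, 0.02).shape)
```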

2.2. Algorithm Optimization

In the field of point cloud processing, the voxel grid method is an effective data down-sampling technique. Although it can effectively reduce the number of data points while maintaining overall point cloud integrity, replacing all points in a cell with the center of gravity limits its handling of edge features. Edge points, the key elements of shape description, are replaced by the centers of gravity of adjacent cells, which may cause significant loss of edge information and in turn shrink the top surface geometry; this is particularly unfavorable for applications requiring accurate geometric information. Using a small voxel size can reduce edge ambiguity to some extent, but it produces a large amount of data and complicates subsequent point cloud operations. Given the significant influence of changes in three-dimensional point cloud dimensions on downstream results, this paper proposes an improved voxel grid down-sampling method that introduces additional normal vector constraints from a two-region perspective to optimize edge processing, ensuring that the lightweight point cloud maintains an efficient compression ratio while its edge deviation remains within an acceptable range.

Voxel Algorithm Based on Normal Vector Change

Although the voxel grid method can efficiently maintain the integrity of point cloud data, when processing the top edge, its scheme of replacing the points inside a cell with the barycenter can easily cause edge data loss. To overcome this defect, this paper develops an improved voxel grid down-sampling method that preserves the data structure of the top edge contour during lightweighting. By setting constraints that limit edge deviation, the edge-loss problem of traditional methods is effectively solved, and the accuracy and integrity of the point cloud edge after lightweighting are ensured. The method is suitable for applications such as segment beam laser point clouds that require accurate edges.

The specific principle is as follows:

(1). Dividing voxels

Let the point cloud data be

$$P=\{p_i\in\mathbb{R}^3 \mid i=1,\dots,N\},$$

where $p_i$ is the three-dimensional coordinate of the $i$th point. The normal vectors of the point cloud are $M=\{m_i\in\mathbb{R}^3 \mid i=1,\dots,N\}$, where $m_i$ is the normal vector of the $i$th point, and the voxel index of each point is $v_i=\lfloor p_i/v \rfloor$, where $v$ is the voxel size.

(2). Voxel processing

① Calculate the mean of the point cloud normal vectors in each voxel:

$$\bar{m}=\frac{1}{N_v}\sum_{i=1}^{N_v}m_i \tag{2}$$

In the formula, $N_v$ is the number of points in a single voxel, and $\bar{m}$ is the mean of their normal vectors.

② Calculate the difference between each normal vector and the mean:

$$\Delta m_j=m_j-\bar{m} \tag{3}$$

In the formula, $\Delta m_j$ is the difference between the normal vector of point $j$ and the mean normal vector, and $m_j$ is that point's normal vector.

③ Calculate the variance of the difference:

$$\mathrm{Var}_j=\|\Delta m_j\|^2 \tag{4}$$

④ In each voxel grid, save the point with the largest normal-vector variance together with the centroid point.

This method retains two key points in each voxel: the first is the centroid point representing the overall position; the second is the point with the largest normal-vector variance, that is, the 'constraint point' that best reflects local geometric variation. This two-point retention strategy is shown in Figure 2.
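A minimal sketch of this two-point retention is given below, assuming per-point normals are already available (e.g., estimated by PCA over local neighborhoods); the helper name and structure are illustrative rather than the authors' implementation.

```python
import numpy as np

def voxel_two_point(points, normals, voxel_size):
    """Per voxel, keep the centroid plus the 'constraint point' whose
    normal deviates most from the voxel's mean normal (Formulas (2)-(4))."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    kept = []
    for v in range(inverse.max() + 1):
        sel = np.flatnonzero(inverse == v)
        pts, nrm = points[sel], normals[sel]
        kept.append(pts.mean(axis=0))              # centroid point
        m_bar = nrm.mean(axis=0)                   # Formula (2): mean normal
        var = np.sum((nrm - m_bar) ** 2, axis=1)   # Formulas (3)-(4)
        kept.append(pts[np.argmax(var)])           # max-variance point
    # Deduplicate single-point voxels where both picks coincide.
    return np.unique(np.asarray(kept), axis=0)
```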

2.3. Edge-Optimized Voxel Grid Algorithm Based on Two Regions

When processing the point cloud data of the segmental beam, a region- and scale-specific processing strategy is adopted to optimize quality and improve efficiency. Firstly, within the designated small-scale processing area, the small-scale voxel algorithm based on normal vector change finely processes the region to capture its subtle geometric changes. The small-scale area is then removed from the original point cloud, and the remaining edge-free points are designated the large-scale processing area, which is processed with a voxel grid of larger cell size. Increasing the voxel size effectively reduces computational complexity while retaining the key features of the overall shape. Through this strategy, the structural integrity of the point cloud is largely maintained while the data volume is effectively reduced.

The dual-region algorithm flow is as follows:

(1). Let the original segment beam point cloud be $U=\{u_1,u_2,\dots,u_n\}$, where $u_i$ is a point of the original cloud. An edge detection method based on the normal vector gradient extracts the edge contour point set $E=\{e_1,e_2,\dots,e_k\}$: the algorithm computes normal vectors by principal component analysis of local neighborhoods and identifies edge points whose normal vector change exceeds a defined threshold. By evaluating multiple parameter combinations with a grid search, the detection threshold α = 0.15 and the neighborhood size k = 30 are selected.

(2). The edge contour point set E is removed from the original point cloud U, giving the remaining point set $U'=U\setminus E$. The large-scale processing area is defined as $R_{\mathrm{large}}=U'$, while the edge set constitutes the small-scale processing area $R_{\mathrm{small}}=E$.

(3). The two-region algorithm is then applied. For the small-scale region $R_{\mathrm{small}}$, the small-size voxel algorithm based on normal vector change is used, with voxel size $\delta_{\mathrm{small}}$ and normal vector change threshold θ. For the large-scale region $R_{\mathrm{large}}$, a voxel grid algorithm with a larger cell size $\delta_{\mathrm{large}}$ is used. The schematic diagram of the algorithm framework is shown in Figure 3, and a code sketch of the flow follows below.
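Reusing the two helpers sketched above, steps (1)–(3) can be assembled as follows. The edge test here, mean normal disagreement within the k-nearest neighborhood against the threshold α = 0.15 with k = 30, is a plausible stand-in for the paper's normal-vector-gradient detector; the authors' exact criterion may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def edge_mask(points, normals, k=30, alpha=0.15):
    """Flag points whose neighborhood normals disagree more than alpha."""
    tree = cKDTree(points)
    _, nbr = tree.query(points, k=k)               # k nearest neighbors
    # 1 - |n_i . n_j| is ~0 for coplanar neighbors, large across edges.
    dots = np.abs(np.einsum('ij,ikj->ik', normals, normals[nbr]))
    return (1.0 - dots).mean(axis=1) > alpha

def dual_region_downsample(points, normals, d_small=0.01, d_large=0.02):
    """Two-region pipeline: fine two-point voxels on the edge region
    R_small, plain large-voxel centroids on the remainder R_large."""
    edges = edge_mask(points, normals)
    r_small = voxel_two_point(points[edges], normals[edges], d_small)
    r_large = voxel_downsample(points[~edges], d_large)
    return np.vstack([r_small, r_large])
```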

The edge-processing results of the two-region edge-optimized voxel grid are shown in Figure 4. Owing to the limited visual resolution of PyCharm, CloudCompare v2.14 was used to visualize the detailed structure.

3. Optimization Algorithm Verification

In this study, a systematic example verification is carried out for the edge-optimized voxel grid down-sampling method based on two regions. The top surface of the segmental beam usually has a relatively regular geometric shape, such as a plane or a slight arc, is geometrically representative of the entire beam, and is relatively independent and easy to operate on; its sampling density can therefore be controlled accurately during down-sampling. Accordingly, the top surface point cloud is used to evaluate the performance of the optimized voxel algorithm. The top surface of the segment beam point cloud is accurately extracted with the RANSAC algorithm. To quantitatively evaluate the sampling performance of the proposed method, a multi-dimensional controlled experiment was designed, systematically comparing it with the traditional voxel grid method, random sampling method, curvature sampling method, and geometric feature sampling method. The experiment was carried out at two precision levels, low and high, and the Contour Deviation Index (CDI) and Surface Area Variation Rate (SAVR) were used to quantitatively analyze the down-sampling effect.

3.1. Random Sample Consensus Algorithm for Extracting Top Surface

Random Sample Consensus (RANSAC), a classical iterative estimation technique, is widely used in data fitting and model parameter inference. Its core idea is to iteratively estimate the parameters of a mathematical model by randomly sampling from a data set that mixes 'interior points' (inliers) and 'exterior points' (outliers). Interior points are data points that closely conform to a preset mathematical model and play a decisive role in accurate parameter estimation; exterior points fail to conform to the model hypothesis because of noise, measurement error, or data anomalies and are treated as interference during parameter estimation.

The specific steps are as follows:

(1). Random selection of samples: A small part of the data is randomly selected from the original data set as samples, and these samples are assumed to all be ‘interior points’.

(2). Model fitting: Use the selected sample data to fit a model.

(3). Interior point detection: For each point in the data set, calculate its distance to the current fitted model (i.e., its residual). If a point's residual is below a preset threshold, it is considered an 'interior point'.

(4). Evaluation model: Record the number of interior points of the current model and update the best model (the model with the largest number of interior points).

(5). Iteration: Repeat the above steps (sample selection, model fitting, interior point detection, model evaluation) until a stopping condition is reached, such as hitting the maximum number of iterations or the interior point count no longer increasing significantly.

(6). Result output: Finally, the algorithm will output the best model and its corresponding interior point set.
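A minimal NumPy sketch of steps (1)–(6) for plane extraction is shown below; the distance threshold and iteration count are illustrative assumptions, not values reported in the paper.

```python
import numpy as np

def ransac_plane(points, threshold=0.005, iters=1000, seed=0):
    """Fit a plane n.p + d = 0 by RANSAC; return the model and inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(iters):
        # (1) Randomly sample the minimal set: 3 points define a plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)             # (2) fit the model
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                               # degenerate sample
        n /= norm
        d = -n @ p0
        # (3) Residual = point-to-plane distance; threshold gives inliers.
        inliers = np.flatnonzero(np.abs(points @ n + d) < threshold)
        if len(inliers) > len(best_inliers):       # (4) keep the best model
            best_inliers, best_model = inliers, (n, d)
    # (5) is the loop itself; (6) output the best model and its inlier
    # set, whose points form the extracted top surface.
    return best_model, best_inliers
```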

The extraction results are shown in Figure 5.

3.2. Parameter Tuning

In order for the small-scale region to capture the finest geometric features, the small voxel size is set to the system's minimum precision unit, 0.01 cm; to balance processing efficiency and avoid the geometric distortion caused by overly large cells in planar areas, the large voxel size is set to 0.02 cm. Under this setting, only the normal vector change threshold θ requires tuning. By evaluating 21 different values of θ, the configuration minimizing the contour deviation is identified.

To quantitatively evaluate the influence of down-sampling on contour offset, the top contour deviation index (CDI) is proposed, defined as the mean Euclidean contour distance. Let the original top surface contour point set be $O=\{o_1,o_2,\dots,o_n\}$, where $o_i$ is a three-dimensional point $(x_i,y_i,z_i)$, and let the down-sampled top surface contour point set be $P=\{p_1,p_2,\dots,p_m\}$, where $p_j$ is a three-dimensional point $(x_j,y_j,z_j)$. For each point $p_j$ in P, a KD tree is used to find its nearest point $o_k$ in O; the Euclidean distances $d(p_j,o_k)$ are computed and averaged. The top surface profile deviation index is given by Formula (5):

$$\mathrm{CDI}=\frac{1}{m}\sum_{j=1}^{m}d(p_j,o_k) \tag{5}$$

where $d(p_j,o_k)$ is the Euclidean distance between $p_j$ and its nearest point $o_k$:

$$d(p_j,o_k)=\sqrt{(x_j-x_k)^2+(y_j-y_k)^2+(z_j-z_k)^2} \tag{6}$$
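Formulas (5) and (6) translate directly into a few lines with a KD tree (SciPy's cKDTree here); the function name is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def cdi(original_contour, sampled_contour):
    """Contour Deviation Index: mean nearest-neighbor distance from each
    down-sampled contour point p_j to the original contour set O."""
    tree = cKDTree(original_contour)        # KD tree over O
    dists, _ = tree.query(sampled_contour)  # d(p_j, o_k) for every p_j
    return dists.mean()
```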

The variation of the top surface contour deviation index (CDI) with θ is shown in Figure 6. The CDI first decreases and then stabilizes as θ increases; at θ = 0.15 the CDI reaches its minimum of 0.0026, indicating the best comprehensive performance under that configuration. The optimal θ is therefore determined to be 0.15.
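The tuning loop can be sketched as follows, reusing the cdi helper above. Here downsample_with_theta stands for a hypothetical wrapper that runs the two-region pipeline at a given threshold and extracts the resulting top contour, and the sweep range is an assumption.

```python
import numpy as np

# Sweep 21 candidate thresholds and keep the one minimizing the CDI,
# mirroring the tuning curve of Figure 6. 'original_contour' and
# 'downsample_with_theta' are hypothetical placeholders.
thetas = np.linspace(0.05, 0.25, 21)
scores = [cdi(original_contour, downsample_with_theta(t)) for t in thetas]
best_theta = thetas[int(np.argmin(scores))]  # ~0.15 on the paper's data
```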

3.3. Algorithm Comparison

To evaluate the sampling effect of this method, it is compared with the ordinary voxel method, random sampling method, curvature sampling method, and geometric feature sampling method. For the ordinary voxel method, grid sizes of 0.02 mm and 0.01 mm define the two precision levels. For random sampling, simplification rates of 3% and 7% define the two levels. For curvature sampling, two sampling-interval configurations are used: interval 5 in feature-rich areas with interval 10 elsewhere, and interval 10 in feature-rich areas with interval 20 elsewhere. For geometric feature sampling, neighborhood sizes of 5 and 10 points define the two levels. The down-sampling results of each method at each precision are shown in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11.

3.4. Result Analysis

In addition to the contour deviation index, this paper uses the surface area change rate as a second key evaluation index, forming a multi-dimensional performance evaluation system. The surface area change rate evaluates a down-sampling algorithm from the perspective of overall shape preservation by computing the relative change in model surface area before and after simplification. Combining the two indices reflects more fully how each method balances geometric feature preservation against data simplification. As shown in Table 1 and Table 2, comparing the CDI and SAVR of this method with the ordinary voxel, random sampling, curvature sampling, and geometric feature sampling methods under low- and high-precision conditions clearly reveals the advantages and limitations of each method in different application scenarios. This multi-index comparison provides a scientific basis for selecting the most suitable point cloud sampling strategy in engineering practice.

(1). Surface area change rate

A point cloud processing function f(p) is defined, using the Poisson reconstruction algorithm to convert a point cloud C into a triangular mesh model M with depth parameter d and the linear fitting option. Applying f to the original and down-sampled models yields the mesh models M_original and M_simplified; their surface areas S_original and S_simplified are computed, and the change rate ρ is defined as

$$\rho=\frac{S_{\mathrm{original}}-S_{\mathrm{simplified}}}{S_{\mathrm{original}}}\times 100\% \tag{7}$$
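Assuming Open3D for the Poisson reconstruction step, Formula (7) can be computed roughly as follows; the paper does not state its depth parameter d, so depth=9 is an illustrative choice.

```python
import open3d as o3d

def savr(original_pts, simplified_pts, depth=9):
    """Surface Area Variation Rate rho of Formula (7) via Poisson meshing."""
    def mesh_area(pts):
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
        pcd.estimate_normals()      # Poisson needs oriented normals
        mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=depth, linear_fit=True)
        return mesh.get_surface_area()
    s_orig, s_simp = mesh_area(original_pts), mesh_area(simplified_pts)
    return (s_orig - s_simp) / s_orig * 100.0
```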

As Figure 12 shows, the mean top surface contour deviation differs significantly across sampling methods and precision conditions. The curvature sampling method has the highest mean deviation under low-precision conditions, while the geometric feature sampling method has the lowest under high-precision conditions, indicating better performance in high-precision scenarios. In contrast, the proposed method's mean deviation is low under both low- and high-precision conditions, showing good stability. The ordinary voxel down-sampling method and the random sampling method lie between the two, with relatively small variation across precision conditions.

As Figure 13 shows, the random sampling and curvature sampling methods have higher change rates under low-precision conditions, while the geometric feature sampling method has the lowest change rate under both conditions. The change rates of the proposed method and the ordinary voxel down-sampling method are relatively low, showing good stability.

(2). Comprehensive Performance Evaluation

In the point cloud processing of segmental beams, engineering applications must balance the efficiency of data simplification against the fidelity of geometric features. On the one hand, the storage and computation pressure of massive point clouds demands a significant reduction in data volume; on the other hand, the accuracy of the edge contour directly affects the reliability of deformation monitoring and BIM modeling. This paper therefore constructs a multi-index weighted evaluation model oriented around edge optimization. Since the edge geometric deviation (CDI) has a decisive influence on structural analysis, it is given a weight of 50%. The data reduction rate (R) directly reflects lightweighting efficiency, so its weight is set to 30%, reflecting engineering practicability while avoiding the risk of over-simplification. The surface area change rate (SAVR) serves as an auxiliary check on overall morphological integrity, with a weight of 20%. This weight distribution follows from analysis of the segment beam application scenario: CDI is weighted highest because edge accuracy is the key to segment beam detection, similar to the emphasis on contour deviation in bridge point cloud processing [22]; the simplification rate weight is moderate, ensuring the data volume is reduced without losing key features; and the SAVR weight is low because surface area changes usually affect the overall shape little, though they still need monitoring to avoid significant deformation. Based on these priorities, the data in the tables are first normalized:

$$\mathrm{Norm}_X=\frac{X-X_{\min}}{X_{\max}-X_{\min}},\qquad X\in\{R,\ \mathrm{CDI},\ \mathrm{SAVR}\} \tag{8}$$

Subsequently, a comprehensive evaluation function is formulated:

$$S=0.3\,\mathrm{Norm}_R+0.5\,\mathrm{Norm}_{\mathrm{CDI}}+0.2\,\mathrm{Norm}_{\mathrm{SAVR}} \tag{9}$$

The overall performance of each method is ultimately quantified by the arithmetic mean of its high- and low-precision scores, yielding a unified total score TS that jointly evaluates accuracy, reduction, and shape preservation:

$$TS=\frac{S_{\mathrm{low}}+S_{\mathrm{high}}}{2} \tag{10}$$

According to Equations (8) and (9), the normalized low-precision and high-precision values of each method are computed, and the integrated evaluation results are presented in Table 3 and Table 4, respectively.
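As a worked check of Equations (8) and (9), the sketch below computes the low-precision composite scores from the raw metrics of Table 1; for the proposed method this yields 0.0287, matching Table 3 (other entries may differ slightly from the published rounding).

```python
import numpy as np

def composite_scores(R, CDI, SAVR, w=(0.3, 0.5, 0.2)):
    """Eq. (8): min-max normalize each metric across methods;
    Eq. (9): weight R by 0.3, CDI by 0.5, SAVR by 0.2. Lower is better."""
    norm = lambda x: (x - x.min()) / (x.max() - x.min())
    return w[0] * norm(R) + w[1] * norm(CDI) + w[2] * norm(SAVR)

# Low-precision metrics from Table 1, in the table's row order:
# proposed, voxel, random, curvature, geometric feature.
R    = np.array([3.7442, 2.9960, 3.0000, 5.1331, 14.9822])    # rate, %
CDI  = np.array([0.002767, 0.008672, 0.0062389, 0.1167, 0.006381])
SAVR = np.array([1.56, 1.58, 15.68, 21.46, 0.51])             # %
s_low = composite_scores(R, CDI, SAVR)
# Eq. (10) then averages s_low with the analogous s_high per method.
```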

The overall performance is derived via this weighted scheme, with weights of 0.50 for geometric fidelity (CDI), 0.30 for data reduction (R), and 0.20 for shape preservation (SAVR). All metrics are first normalized to eliminate unit disparities and then aggregated according to the assigned weights; the final score averages the low- and high-precision outcomes, with lower values denoting better overall quality. Applying Equation (10), the aggregated scores of the five methods are: proposed method 0.0579, voxel method 0.0836, geometric feature sampling 0.1718, random sampling 0.3235, and curvature sampling 0.6534. The proposed method achieves the lowest score and thus the best comprehensive performance.

A cross-method evaluation on both low- and high-precision data sets confirms the proposed method’s superiority. First, its geometric fidelity is markedly better: the normalized CDI remains 0, which is substantially lower than curvature sampling’s 1.0, ensuring maximal retention of structural details. Second, while voxel methods marginally outperform in shape preservation (SAVR), their CDI normalization is significantly higher, indicating the proposed method strikes a more favorable balance between reduction and fidelity. Third, the proposed method exhibits the smallest score fluctuation across precision levels (0.0287 low-precision vs. 0.0871 high-precision), whereas random and curvature sampling degrade sharply with changing precision. After weighted aggregation, the proposed method attains the minimal total score of 0.0579, evidencing consistent advantages in geometric accuracy, data reduction, and shape preservation. Consequently, it is recommended as the preferred down-sampling solution for point-cloud processing.

4. Conclusions

A comprehensive evaluation of five down-sampling methods on low- and high-precision data sets shows that the proposed method performs best among all competitors. First, its geometric accuracy advantage is clear: on the most critical index, CDI, its normalized value is always 0, far below the curvature sampling method's 1.0, ensuring maximum retention of model detail. Second, although the voxel method is slightly better in shape retention (SAVR), its normalized CDI is significantly higher, indicating that the proposed method achieves a better balance between simplification rate and shape retention. Furthermore, the method shows the smallest score fluctuation across precision conditions (0.0287 at low precision, 0.0871 at high precision), whereas the random and curvature sampling methods degrade severely as precision changes. This stability stems from the two-region algorithm design: in the small-scale (edge) region, the voxel algorithm based on normal vector change accurately captures edge details by retaining the point with the largest normal-vector variance, avoiding the feature loss that shrinking grids cause at high precision; in the large-scale (non-edge) region, large voxel grids efficiently simplify the data and reduce computational complexity. By contrast, the curvature sampling method relies on curvature estimation, which is error-prone when point cloud density changes, producing large CDI fluctuations. By separating edge and non-edge regions, the two-region strategy maintains consistent performance across precision levels. After weighting the three indices, the total score of the proposed method is only 0.0579, the minimum among all methods, demonstrating significant and stable comprehensive advantages in geometric accuracy, data simplification, and shape preservation. It can therefore be recommended as the first choice for point cloud down-sampling.

However, this study still has some limitations. Firstly, the method depends on the accuracy of the edge detection algorithm; in point clouds with severe noise (such as environmental interference during on-site acquisition), edge extraction may be incomplete, affecting the down-sampling effect. Secondly, the applicability of the method to civil engineering structures other than segmental beams (such as bridges and tunnels) has not been verified, and those structures may have more complex geometric features. In addition, parameter tuning (such as the normal vector change threshold) is currently manual; adaptive parameter settings could be explored in the future to improve efficiency.

Future work will focus on the following directions: first, optimizing the edge detection algorithm, combining deep learning to improve robustness in noisy environments; second, extending the method to other structural types to verify its generalization ability; third, developing an automated parameter tuning module to reduce manual intervention; and fourth, integrating real-time processing to meet the rapid analysis needs of project sites.

Author Contributions

Conceptualization, J.Y.; methodology, Z.H.; software, Z.H.; validation, J.Y.; formal analysis, J.Y.; investigation, M.L.; data curation, J.G.; writing—original draft preparation, Z.H.; writing—review and editing, Z.H.; visualization, Z.H.; supervision, X.J.; project administration, X.J. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

Author Jiayan Yang was employed by the company China Harbour Engineering Company Ltd. and author Menghui Li was employed by the company Road&Bridge International Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Figures and Tables

Figure 1 Schematic diagram of ordinary voxel grid method. Reproduced with permission from Yan Dong, Buildings; published by MDPI, 2024 [4].

Figure 2 Constrained point and centroid point retention diagram.

Figure 3 Two-region optimization voxel algorithm framework.

Figure 4 Optimized voxel grid method to process the original point cloud data.

Figure 5 Top surface of point cloud extracted by RANSAC algorithm. (a) Original point cloud. (b) Optimized voxel grid method.

Figure 6 Parameter tuning results.

Figure 7 Sampling results of this method.

Figure 8 Ordinary voxel sampling results.

Figure 9 Sampling results of random sampling method.

Figure 10 Sampling results of curvature sampling method.

Figure 11 Sampling results of geometric feature sampling method.

Figure 12 Comparison of the top surface profile deviation index of different down-sampling methods.

Figure 13 Comparison of surface area change rate of different sampling methods.

Table 1 Comparison of top surface profile deviation index and surface area change rate of different down-sampling methods under low-precision.

| Down-Sampling Method | Number of Points | Simplification Rate | Top Surface Profile Deviation Index | Surface Area Change Rate |
|---|---|---|---|---|
| Low-precision method in this paper | 219,912 | 3.7442% | 0.002767 | 1.56% |
| Low-precision voxels | 175,967 | 2.9960% | 0.008672 | 1.58% |
| Low-precision random sampling method | 176,201 | 3.0000% | 0.0062389 | 15.68% |
| Low-precision curvature sampling method | 301,486 | 5.1331% | 0.1167 | 21.46% |
| Low-precision geometric feature sampling method | 879,965 | 14.9822% | 0.006381 | 0.51% |

Table 2 Comparison of top surface profile deviation index and surface area change rate of different down-sampling methods under high-precision.

| Down-Sampling Method | Number of Points | Simplification Rate | Top Surface Profile Deviation Index | Surface Area Change Rate |
|---|---|---|---|---|
| High-precision method of this paper | 756,984 | 12.8883% | 0.0002789 | 1.46% |
| High-precision voxels | 644,715 | 10.9769% | 0.005156 | 0.13% |
| High-precision random sampling method | 411,137 | 7.0000% | 0.003381 | 17.61% |
| High-precision curvature sampling method | 602,970 | 10.2661% | 0.09729 | 11.47% |
| High-precision geometric feature sampling method | 1,855,801 | 31.5967% | 0.0009164 | 0.15% |

Table 3 Normalized values and composite scores under low-precision conditions.

| Down-Sampling Method | Norm_R | Norm_CDI | Norm_SAVR | S_low |
|---|---|---|---|---|
| Low-precision method in this paper | 0.0624 | 0.0000 | 0.0501 | 0.0287 |
| Low-precision voxels | 0.0000 | 0.0513 | 0.0511 | 0.1028 |
| Low-precision random sampling method | 0.0001 | 0.0307 | 0.7289 | 0.2475 |
| Low-precision curvature sampling method | 0.1800 | 1.0000 | 1.0000 | 0.6914 |
| Low-precision geometric feature sampling method | 1.0000 | 0.0314 | 0.0000 | 0.1920 |

Table 4 Normalized values and composite scores under high-precision conditions.

| Down-Sampling Method | Norm_R | Norm_CDI | Norm_SAVR | S_high |
|---|---|---|---|---|
| High-precision method of this paper | 0.2396 | 0.0000 | 0.0761 | 0.0871 |
| High-precision voxels | 0.1616 | 0.0502 | 0.0000 | 0.0643 |
| High-precision random sampling method | 0.0000 | 0.0319 | 1.0000 | 0.3994 |
| High-precision curvature sampling method | 0.1325 | 1.0000 | 0.6517 | 0.6154 |
| High-precision geometric feature sampling method | 1.0000 | 0.0065 | 0.0011 | 0.1516 |

References

1. Hu, Z.X.; Cao, L.Y.; Pei, D.F.; Mei, Z. Improved Preprocessing and Optimized 3D Reconstruction Algorithm of Adaptive Simplified Point Cloud. Laser Optoelectron. Prog.; 2023; 60, pp. 219-224.

2. Daniels, J.; Silva, C.T.; Shepherd, J.; Cohen, E. Quadrilateral mesh simplification. ACM Trans. Graph.; 2008; 27, pp. 1-9. [DOI: https://dx.doi.org/10.1145/1409060.1409101]

3. Zhao, Y.; Liu, Y.; Song, R.; Zhang, M. A saliency detection based method for 3D surface simplification. Proceedings of the IEEE International Conference on Acoustics; Kyoto, Japan, 25–30 March 2012.

4. Dong, Y.; Yang, H.; Yin, M.; Li, M.; Qu, Y.; Jia, X. Research on Lightweight Method of Segment Beam Point Cloud Based on Edge Detection Optimization. Buildings; 2024; 14, 1221. [DOI: https://dx.doi.org/10.3390/buildings14051221]

5. Xiao, Z.; Gao, J.; Wu, D.; Zhang, L. A Uniform Downsampling Method for Three-Dimensional Point Clouds Based on Voxel Grids. Mech. Des. Manuf.; 2023; 8, pp. 180-184.

6. Meiju, L.; Junrui, Z.; Xifeng, G.; Rui, Z. Application of Improved Point Cloud Streamlining Algorithm in Point Cloud Registration. Proceedings of the 32nd Chinese Control and Decision Conference; Online, 22–24 August 2020.

7. Wang, J.Q.; Fan, Y.G.; Li, G.S.; Yu, D.F. Adaptive Point Cloud Reduction Based on Multi Parameter k-Means Clustering. Laser Optoelectron. Prog.; 2021; 58, pp. 175-183. [DOI: https://dx.doi.org/10.3788/lop202158.0610008]

8. Shi, B.; Liang, J.; Zhang, X.; Shu, W. Research on Point Cloud Simplification with Preserved Features. J. Xi’an Jiaotong Univ.; 2010; 44, 4.

9. Li, Q.Q.; Hua, X.H.; Zhao, B.F.; Tao, W.Y.; Qi, H.W. A method for scattered point cloud simplification based on curvature Poisson disk sampling. Bull. Surv. Mapp.; 2020; S1, pp. 176-180.

10. Yang, P.; Meng, J. A point cloud classification downsampling and registration method for cultural relics based on curvature features. Chin. Opt.; 2024; 17, pp. 572-579.

11. Chen, Z.; Da, F. 3D Point Cloud Simplification Algorithm Based on Fuzzy Entropy Iteration. Acta Opt. Sin.; 2013; 7, 815001. [DOI: https://dx.doi.org/10.3788/AOS201333.0815001]

12. Fu, S.; Wu, L.; Chen, H. Point Cloud Simplification Method Based on Space Grid Dynamic Partitioning. Acta Opt. Sin.; 2017; 37, 9. [DOI: https://dx.doi.org/10.3788/aos201737.1115007]

13. Lee, H.; Jeon, J.; Hong, S.; Kim, J.; Yoo, J. TransNet: Transformer-Based Point Cloud Sampling Network. Sensors; 2023; 23, 4675. [DOI: https://dx.doi.org/10.3390/s23104675] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37430589]

14. Hui, L.; Cheng, M.; Xie, J.; Yang, J.; Cheng, M.M. Efficient 3D point cloud feature learning for large-scale place recognition. IEEE Trans. Image Process.; 2022; 31, pp. 1258-1270. [DOI: https://dx.doi.org/10.1109/TIP.2021.3136714] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34982682]

15. Yao, L.; Huang, S.; Xu, H.; Li, P. Quadratic Error Metric Mesh Simplification Algorithm Based on Discrete Curvature. Math. Probl. Eng.; 2015; 2015, 428917. [DOI: https://dx.doi.org/10.1155/2015/428917]

16. Lyu, W.; Ke, W.; Sheng, H.; Ma, X.; Zhang, H. Dynamic Downsampling Algorithm for 3D Point Cloud Map Based on Voxel Filtering. Appl. Sci.; 2024; 14, 3160. [DOI: https://dx.doi.org/10.3390/app14083160]

17. Dehghanpour, A.; Sharifi, Z.; Dehyadegari, M. Point cloud downsampling based on the transformer features. Vis. Comput.; 2025; 41, pp. 2629-2638. [DOI: https://dx.doi.org/10.1007/s00371-024-03555-4]

18. Li, R.; Yang, M.; Liu, Y.; Zhang, H. An Uniform Simplification Algorithm for Scattered Point Cloud. Acta Opt. Sin.; 2017; 37, 9. [DOI: https://dx.doi.org/10.3788/AOS201737.0710002]

19. Cheng, Y.; He, X.; Jia, D.; Zhao, L. Weight-based Compression Algorithm of Scattered Point Clouds. Henan Sci.; 2019; 37, pp. 1145-1151.

20. Tian, Y.; Yu, Y.; Xue, S. Research on a 3D point cloud simplification algorithm based on double feature constraints. Ind. Control Comput.; 2021; 34, pp. 80–81+84.

21. Bo, L.; Yang, Y.; Shuo, J. Review of advances in LiDAR detection and 3D imaging. Opto-Electron. Eng.; 2019; 46, pp. 21-33.

22. Li, M.; Xu, H.; Yan, D. Simplified optimization algorithm based on point cloud features of bridge. China Sci.; 2024; 19, pp. 920-927.

© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).