Abstract
This study proposes a low-cost indoor positioning system based on a single Light Detection and Ranging (LiDAR) sensor and several fixed reflective reference points. Distances are obtained by trigonometric measurement, and positions are computed by trilateration. In static tests, the average error was 7.4 mm. When the target moves at walking speed, small survey errors in the reference points cause the average error to increase to 21.8 mm. Finally, the proposed Reference Point Update Method (RPUM), which continuously corrects reference-point coordinates using a moving average of recent residuals, reduces the average dynamic error from 208.71 mm to 20.34 mm, an improvement of about 90%. The method requires no additional hardware and runs in real time.
1. Introduction
The fifth-generation mobile network (5G) provides high-speed and low-latency communication for various applications, including outdoor and indoor positioning. GPS plays a vital role in outdoor positioning, but its signals cannot reach inside buildings, where localization is nevertheless crucial for many applications. In recent years, with the rapid development of Internet of Things (IoT) [1] technology, Location-Based Services (LBS) have also expanded rapidly [2], both indoors and outdoors, leading to an increasing demand for indoor positioning. Indoor positioning technology is critical for smart buildings, logistics management, indoor navigation, and security monitoring. However, traditional indoor positioning technologies, such as those based on Wi-Fi, Bluetooth, and RFID, are often affected by signal penetration, multipath effects, and variations in signal strength. These challenges result in low positioning accuracy, especially in complex indoor environments, highlighting the need for advancements in this field. In comparison to indoor positioning technologies such as Bluetooth, Wi-Fi, UWB, and RFID, Light Detection and Ranging (LiDAR) has garnered significant attention due to its advantages of high accuracy, real-time capability, and resistance to interference. LiDAR works by emitting laser beams and measuring their return time and reflection angles, allowing it to obtain distance and position information of objects.
With the continuous development of smart cities and automation technologies, the demand for more precise and reliable Indoor Positioning Systems (IPS) has become increasingly urgent [3,4]. The operation of intelligent buildings requires accurate positioning of personnel and equipment to enhance energy efficiency and safety. In logistics management, tracking goods is essential for optimizing transportation and storage processes. Precise indoor navigation significantly improves the user experience in public spaces and large shopping malls. In security monitoring, accurate positioning technology ensures personnel safety and enables rapid emergency responses. The suboptimal performance of traditional technologies in complex indoor environments has driven the research and application of new technologies.
1.1. Time of Flight (TOF) Algorithm
The Time of Flight (TOF) algorithm calculates distance from the time it takes a light or sound wave to travel from the emission point to the reflection point and back. Specifically, the system emits a signal, such as a laser pulse or ultrasound, that reflects when it encounters a target object. The receiver records the total round-trip time of the signal and calculates the distance to the target using the known propagation speed, such as the speed of light or sound [5]. Since the speed of light is extremely fast, this method requires a highly precise timing circuit, typically with picosecond resolution, and a laser emission circuit with a very narrow pulse width, typically in the nanosecond range. These high-precision circuit requirements make the development of TOF systems complex and challenging. However, lasers that utilize the TOF principle can typically achieve detection distances of hundreds of meters. The advantages of this algorithm lie in its long range and high accuracy, making it suitable for most applications, such as autonomous driving, 3D imaging, and industrial robotics. The method can quickly generate dense depth data of the environment, which is particularly useful for real-time applications such as autonomous driving and industrial automation.
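As a quick numeric illustration of the timing requirement, the following minimal sketch (with illustrative values, not the authors' hardware) shows why picosecond-class timing is needed for millimeter-level TOF ranging:

```python
# Minimal sketch of the TOF relation: distance is half the round-trip
# time multiplied by the propagation speed (values are illustrative).
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance = c * t / 2, since the pulse travels out and back."""
    return C * round_trip_s / 2.0

# A 1 ns round trip is ~15 cm; each extra picosecond adds ~0.15 mm,
# which is why TOF ranging needs picosecond-class timing circuits.
print(tof_distance_m(1e-9))          # ~0.1499 m
print(tof_distance_m(1e-9 + 1e-12))  # ~0.1499 m + 0.00015 m
```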
However, the TOF ranging method also has some limitations. First, interference from ambient light can affect measurement accuracy, especially in bright environments where external light sources may interfere with the receiver’s ability to detect the reflected signal. Additionally, objects with low reflectivity such as black or light-absorbing materials can reduce the reflected signal’s intensity, impacting the measurement’s reliability and accuracy. Furthermore, measurement accuracy decreases as the distance increases due to signal attenuation and noise accumulation during long-distance transmission. In practical applications, TOF technology is often combined with other technologies to enhance stability and accuracy. For instance, data fusion from multiple TOF sensors can improve the system’s resistance to interference and measurement accuracy. Alternatively, combining TOF with technologies such as structured light or phase-based ranging can create a hybrid system, ensuring efficient performance across different environments.
1.2. Trigonometric Measurement Algorithm
The trigonometric measurement algorithm [6] applies the principles of trigonometry to calculate distances. Typically, the system uses two sensors or cameras positioned at known locations to observe the same target. It calculates the angle difference between the two views, known as parallax. Then, the distance to the target object is determined from the known baseline distance and the angle difference. This algorithm is also applied in LiDAR technology. In LiDAR applications, such as multi-line LiDAR scanning, each scanning line represents a different view angle, and the distance is calculated using the data from these various angles. The LiDAR system emits laser beams and measures the time delay and angle changes of the reflected beams, constructing an accurate 3D map of the environment.
The advantages of this method include high accuracy and fast response, making it particularly suitable for short-range and high-resolution applications, such as autonomous driving and high-precision industrial inspections.
However, the algorithm has drawbacks, particularly the need for precise calibration. Any minor errors in the positioning and angles of the sensors can affect the accuracy of the distance measurement. Some preliminary results have been presented in previous work [7]. Additionally, for long-distance measurements, the angle difference becomes minimal, thereby increasing the measurement uncertainty. In complex environments, occlusions can block the view angles, making the measurement more challenging. Therefore, the trigonometric measurement method is often combined with other technologies to enhance measurement stability and accuracy in practical applications.
1.3. Phase Shift Algorithm
The Phase Shift algorithm [8] calculates the distance using the phase difference between a continuous wave signal, such as a sine wave, and its reflected signal. The method works by emitting a signal of known frequency and then receiving its reflection from a target. The system measures the phase difference between the emitted and reflected signals and calculates the distance based on this phase difference and the signal’s wavelength.
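The relation can be sketched in a few lines of Python; the modulation frequency below is an illustrative value, not one used in the paper:

```python
import math

def phase_shift_distance_m(delta_phi_rad: float, freq_hz: float) -> float:
    """Distance from the measured emitted/reflected phase difference.
    The round trip covers 2*d, so one full 2*pi cycle corresponds to half
    a wavelength; beyond that, the measurement becomes ambiguous."""
    wavelength = 299_792_458.0 / freq_hz
    return (delta_phi_rad / (2.0 * math.pi)) * wavelength / 2.0

# A 10 MHz modulation (~30 m wavelength) gives a 15 m unambiguous range:
print(phase_shift_distance_m(math.pi / 2, 10e6))  # ~3.75 m
```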
One of the key advantages of the phase shift method is its high accuracy in distance measurement, making it especially suitable for short-range applications. It also performs well in multipath reflection environments, where signals may overlap. The ability of the phase shift technique to distinguish between direct and multipath reflected signals contributes to its effectiveness in such environments. Additionally, this method maintains high accuracy even when measuring objects with low reflectivity.
However, the phase shift method also has some limitations. Firstly, it requires high-frequency stability because even slight variations in frequency can lead to inaccuracies in phase measurement, affecting distance calculation accuracy. Secondly, the measurement range is limited by the wavelength of the signal. Since phase measurement involves the periodic changes of the waveform, when the distance exceeds one wavelength, it becomes difficult to distinguish the exact phase difference, leading to ambiguity. This limitation makes the phase shift method more suitable for precise short-range measurements. In contrast, long-range measurements must be combined with other techniques to resolve ambiguity.
1.4. Literature Discussion
In reference [9], the researchers investigated how to integrate two different positioning technologies, UWB and LiDAR, to improve the accuracy and response speed of positioning systems. This study serves as a valuable reference for understanding the integrated application of LiDAR technology with other sensor technologies. In references [10,11,12], comparative analyses were conducted of the performance of these technologies in various aspects, evaluating their advantages and disadvantages in indoor positioning applications. This research helps us better understand the role and application scenarios of LiDAR technology in Simultaneous Localization and Mapping (SLAM) and other related applications.
Reference [13] focuses on leveraging the characteristics of LiDAR to effectively integrate varying feature information, thereby improving the accuracy and stability of indoor positioning. This study presents a novel approach and method for utilizing LiDAR technology in indoor positioning. Meanwhile, reference [14] investigates precise measurement and data processing techniques for LiDAR, proposing an efficient positioning solution. This research has practical significance for improving the accuracy and reliability of indoor positioning systems.
Additionally, recent studies have focused on SLAM frameworks based on 5G New Radio (NR5G) integrating wireless communication technology into SLAM systems [15]. This integration unlocks new possibilities for robot perception and communication in various scenarios, advancing the field of autonomous navigation and providing fresh insights for the design and application of future intelligent robotic systems. Such a technological foundation will significantly enhance robots’ ability to perform tasks in complex environments.
Finally, TOF cameras have garnered widespread attention in recent years [16,17,18]. Based on the working principle of the Time of Flight, this technology brings new possibilities for high-precision depth information in perception systems. By measuring the time required for a light pulse to travel from emission to reception, TOF cameras can capture high-accuracy depth information. This makes TOF cameras a powerful perception tool, particularly excelling in applications that require detailed three-dimensional environmental awareness. Moreover, combining TOF cameras with LiDAR technology can further enhance the reliability and stability of perception systems.
2. Literature Review of Positioning Method
Standard indoor positioning technologies include Bluetooth, Wi-Fi, UWB, and RFID. Table 1 illustrates that these technologies can be selected based on the specific requirements of the application. For instance, Bluetooth technology can be chosen when low power consumption and widespread adoption are essential for indoor positioning. Alternatively, Wi-Fi technology may be preferred when low cost and broad applicability are priorities.
Compared to the technologies above, LiDAR technology has garnered significant attention in indoor positioning due to its advantages of high accuracy, real-time capabilities, and resistance to interference. LiDAR technology works by emitting laser beams and measuring their return time and reflection angles, allowing it to obtain distance and position information of objects. Its three-dimensional scanning capability and millimeter-level ranging accuracy suitably meet high-precision positioning requirements in various indoor environments. Standard distance measurement methods used in LiDAR include the Time of Flight (TOF) algorithm, the Trigonometric Measurement algorithm, and the Phase Shift algorithm, with the TOF method being the most widely used, followed by the Trigonometric Measurement algorithm.
2.1. Light Detection and Ranging
LiDAR is a technology that measures the distance and position of objects by emitting laser beams. A LiDAR system typically consists of a laser emitter, a receiver, a scanner, and a control unit. The laser emitter sends out short laser pulses; the receiver detects the reflected laser; the scanner rotates or oscillates the laser beam to cover the target area; and the control unit handles data processing and analysis. The basic principles include the TOF and Trigonometric Measurement algorithms. TOF measures the time it takes for a laser pulse to reach an object and return, calculating the distance based on this time. The Trigonometric Measurement principle, on the other hand, is based on trigonometry. The system typically uses two laser emitters or receivers positioned at known locations to observe the same target from different angles. By measuring the difference in angles, known as parallax, and using the known baseline distance, the distance to the target object can be calculated.
The advantages of LiDAR include high accuracy, high speed, high resolution, strong resistance to interference, and 3D perception capabilities. First, LiDAR can provide centimeter-level or even millimeter-level ranging accuracy, making it suitable for high-precision applications such as terrain mapping, urban modeling, and autonomous driving. Second, LiDAR can quickly scan large areas and generate high-resolution 3D point cloud data, which is crucial for real-time applications such as autonomous driving. Additionally, LiDAR does not rely on external lighting conditions, allowing it to operate consistently in both day and night, or bright and dim environments, without being affected by radio signal interference. Ultimately, LiDAR can produce precise 3D environmental models, making it suitable for various applications that require spatial awareness and object recognition, such as robotic navigation and building surveying.
However, LiDAR technology also has its drawbacks, including high cost, environmental limitations, large data processing requirements, and the need for a direct line of sight. The hardware components of a LiDAR system, such as the laser emitter, scanner, and high-precision receiver, are expensive and require precise calibration and maintenance. The laser beam may scatter or attenuate in adverse weather conditions, such as fog, rain, or snow, which can affect measurement accuracy and the effective range. Additionally, LiDAR generates large volumes of 3D point cloud data, requiring powerful processing and storage capabilities, which increases system complexity and cost. Finally, LiDAR requires a direct line of sight to detect objects, which can result in blind spots in environments with numerous obstacles, such as dense forests or urban areas.
Overall, LiDAR technology [19,20,21], with its high accuracy and 3D perception capabilities, has become a vital tool in various fields, particularly in autonomous driving, terrain mapping, and industrial automation. However, its high cost and environmental limitations must be considered when selecting a suitable solution for practical applications.
Recent discussions on LiDAR technology have focused on LiDAR-based IPS [22,23,24,25], self-calibration of reference points [24,25,26], and online map correction [26,27,28]. The future development of LiDAR technology will focus on reducing costs, improving resistance to interference, and enhancing data processing techniques further to increase its adoption and effectiveness in various applications.
YDLIDAR X2 [19] was selected for this study due to its affordability and high performance. It meets most application needs regarding ranging capability, and its measurement accuracy and angular resolution allow it to precisely capture information about the surrounding environment. Its low power consumption also makes it particularly suitable for battery-powered mobile devices.
2.2. Trigonometric Distance Calculation
YDLIDAR X2 utilizes the trigonometric measurement principle, combined with related optical, electrical, and algorithmic designs, to achieve high-frequency and high-precision distance measurements. While measuring distances, the mechanical structure rotates 360 degrees, continuously acquiring angle information, thereby enabling 360-degree scanning and ranging. The system outputs point cloud data of the scanned environment. The principle of trigonometric measurement is illustrated in Figure 1.
The laser emitter projects a laser beam, which, upon hitting an object, is reflected and received by a Charge Coupled Device (CCD). Since there is a distance between the laser and the detector, objects at different distances will be imaged at different positions on the CCD according to the optical path. The distance to the measured object can be derived using the trigonometric formula. Based on the ranging principle, the distance from the LiDAR to the measured object can be expressed as follows [29]:
$D = \frac{f \cdot L}{d}$ (1)
where f represents the focal length of the receiving lens in millimeters, L is the offset between the optical axis of the emitted light path and the principal axis of the receiving lens, and d is the position offset of the imaged spot on the receiving CCD. Because D is inversely proportional to d, the offset d shrinks as the target moves away, and a small error in d then produces a large error in D, so the ranging resolution deteriorates rapidly with range. Therefore, the accuracy of the trigonometric measurement method decreases as the distance to the measured object increases.
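This inverse relationship explains the loss of resolution at range. A minimal numeric sketch with hypothetical optics parameters (not the YDLIDAR X2's actual values) makes the effect concrete:

```python
# Hypothetical optics parameters for illustration only:
F_MM = 4.0   # focal length of the receiving lens, mm
L_MM = 20.0  # baseline offset between emitter axis and lens axis, mm

def triangulation_distance(d_mm: float) -> float:
    """Equation (1): distance from the spot offset d on the CCD."""
    return F_MM * L_MM / d_mm

# The same 0.01 mm CCD uncertainty costs far more accuracy at long range:
for d in (0.80, 0.08):  # near target vs. far target
    D = triangulation_distance(d)
    dD = triangulation_distance(d - 0.01) - D
    print(f"d = {d:.2f} mm -> D = {D:.0f} mm, 0.01 mm CCD error -> {dD:.1f} mm")
```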
2.3. Triangulation Algorithm
Triangulation here essentially computes the common intersection point of three circles, as illustrated in Figure 2. The intersection point represents the measured coordinate point. When the distances from the measured coordinate point to the three reference points are accurately determined, an algorithm can be employed to estimate the correct position of the calculated coordinates.
Let the coordinates of the reference nodes be $A(x_A, y_A)$, $B(x_B, y_B)$, and $C(x_C, y_C)$, and the unknown node coordinates be $(x, y)$. The distances from the reference nodes to the unknown node, denoted as $d_A$, $d_B$, and $d_C$, can be expressed as follows [30]:
$d_A = \sqrt{(x - x_A)^2 + (y - y_A)^2}$ (2)
$d_B = \sqrt{(x - x_B)^2 + (y - y_B)^2}$ (3)
and
$d_C = \sqrt{(x - x_C)^2 + (y - y_C)^2}$ (4)
The system of equations can be set up as follows. Squaring Equations (2) through (4) and subtracting Equation (4) from Equations (2) and (3) eliminates the quadratic terms in $x$ and $y$, so the estimated position of the unknown node can be derived through matrix manipulation. The matrix representation of the system is as follows:
$\mathbf{X} = \begin{bmatrix} x \\ y \end{bmatrix} = \left(\mathbf{A}^{\mathsf{T}}\mathbf{A}\right)^{-1}\mathbf{A}^{\mathsf{T}}\mathbf{b}$ (5)
in which
$\mathbf{A} = 2\begin{bmatrix} x_A - x_C & y_A - y_C \\ x_B - x_C & y_B - y_C \end{bmatrix}$ (6)
and
$\mathbf{b} = \begin{bmatrix} x_A^2 - x_C^2 + y_A^2 - y_C^2 + d_C^2 - d_A^2 \\ x_B^2 - x_C^2 + y_B^2 - y_C^2 + d_C^2 - d_B^2 \end{bmatrix}$ (7)
Although this positioning method provides precise location estimates when the three measured distances are error-free, it becomes problematic when distance measurement errors are present. In such cases, it is impossible to determine a precise coordinate point, as illustrated in Figure 3. Due to the measurement errors in the distances, the estimated position is not a single fixed point but instead falls within the region bounded by the three pairwise intersection points of the circles.
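To make the derivation concrete, a minimal Python sketch of this linearized trilateration is given below. The anchor coordinates come from Table 2, while the range values are hypothetical noisy measurements toward measuring point M:

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate (x, y) from three anchors and measured ranges via the
    linearized least-squares form of Equations (2)-(7)."""
    (xa, ya), (xb, yb), (xc, yc) = anchors
    da, db, dc = distances
    # Subtract the third circle equation from the first two to
    # eliminate the quadratic terms (Equation (6)).
    A = 2.0 * np.array([[xa - xc, ya - yc],
                        [xb - xc, yb - yc]])
    b = np.array([xa**2 - xc**2 + ya**2 - yc**2 + dc**2 - da**2,
                  xb**2 - xc**2 + yb**2 - yc**2 + dc**2 - db**2])
    # Least-squares solution X = (A^T A)^-1 A^T b (Equation (5)).
    x, y = np.linalg.lstsq(A, b, rcond=None)[0]
    return x, y

# Reference points A, B, C from Table 2 (millimeters).
anchors = [(0, 1406), (2536, 7647), (5970, 1406)]
# Hypothetical range measurements (mm) toward point M (2603, 3058).
print(trilaterate(anchors, (3083.0, 4589.5, 3750.4)))  # ~(2603, 3058)
```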
3. Error Cancelation Applied to Indoor Positioning
3.1. Error Cancelation (EC)
The error is within the centimeter or even millimeter range when the LiDAR measures reference points on a vertical surface. However, uncontrollable error ranges occur when measuring reference points on an inclined surface, with errors exceeding 10 cm, even at close distances. Therefore, to ensure that the surface from the LiDAR to the reference points is a vertical plane, we propose an EC method. The goal is to collect data within a stable error range, thereby significantly reducing measurement distance errors during positioning.
All the experiments in this study were conducted in Room 209.1 of the Chaoyang Information Building. The experiments were conducted in a LOS environment, measuring 7647 mm by 5970 mm. In the experimental environment, three reference points, A, B, and C, and nine measurement points, D, G, H, …, and N, were set up, as shown in Figure 4. The red triangles represent the reference points, and the blue dots represent the measurement points. All the coordinates of the reference points and measurement points are listed in Table 2.
The experiments in this study were divided into three parts: (1) measurement of LiDAR’s distance error and angular slope error, (2) triangulation experiment using LiDAR in a LOS environment, and (3) application of the EC method in indoor positioning. The experimental measurement process is illustrated in Figure 5. At the beginning of the experiment, the initial setup involves connecting the YDLIDAR X2 to the ESP32. The ESP32 receives Wi-Fi and requests the YDLIDAR X2 to perform a scan. The scan results are converted into strings and transmitted to a local machine via Wi-Fi. Upon receiving the results, the local machine filters the necessary data and calculates the positioning coordinates using a triangulation algorithm based on the required angular distances. The computed coordinates are then stored in a database. The local machine determines whether the desired number of measurements has been reached; for this experiment, 500 data points were collected to observe the distribution of errors. Once this threshold is reached, the measurement process is concluded.
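The host-side part of this flow can be sketched as follows. This is an illustrative reconstruction only: the port number, the "angle:distance;…" string format, and parse_scan() are assumptions, since the paper does not specify the ESP32's wire format.

```python
import socket

HOST, PORT, TARGET_SAMPLES = "0.0.0.0", 8888, 500

def parse_scan(line: str) -> dict[float, float]:
    """Parse an assumed 'angle:distance;...' string into {angle_deg: dist_mm}."""
    pairs = (p.split(":") for p in line.strip().split(";") if p)
    return {float(a): float(d) for a, d in pairs}

scans = []
with socket.create_server((HOST, PORT)) as server:
    conn, _ = server.accept()           # ESP32 connects over Wi-Fi
    stream = conn.makefile()
    while len(scans) < TARGET_SAMPLES:  # 500 points in this experiment
        scan = parse_scan(stream.readline())
        # Filter the returns at the bearings of anchors A, B, and C here,
        # then trilaterate (see the sketch in Section 2.3) and store to a DB.
        scans.append(scan)
print(f"Collected {len(scans)} scans")
```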
The third experiment is divided into two parts: the training procedure and the testing procedure. At the beginning of the training procedure, the initial setup is identical to that in the previous measurement process. The YDLIDAR X2 is connected to the ESP32, which receives Wi-Fi signals and requests the YDLIDAR X2 to perform a scan. The scan results are converted into strings and transmitted to the local machine via Wi-Fi. Upon receiving the results, the local machine filters the required data and stores it in a database. Once the necessary amount of data is collected, which is 200 data points per location, the data is averaged. The averaged data is then compared to the actual distance, and the difference between the two gives the offset error value for the distance between the measurement point and the reference point.
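A minimal sketch of this training step follows, with illustrative numbers (a synthetic 18 mm range bias) rather than the measured data from the paper:

```python
import numpy as np

def ec_offset(measured_mm: np.ndarray, true_mm: float) -> float:
    """Average the repeated measurements at a known point and subtract
    the true distance; the difference is the offset error value."""
    return float(np.mean(measured_mm)) - true_mm

rng = np.random.default_rng(0)
true_dist = 3083.0                                       # known distance, mm
measured = true_dist + 18.0 + rng.normal(0.0, 5.0, 200)  # 200 training samples
offset = ec_offset(measured, true_dist)                  # ~18 mm
corrected = measured - offset                            # applied at test time
print(round(offset, 1), round(float(np.mean(corrected)) - true_dist, 2))
```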
The key performance parameters of the YDLIDAR X2 are as follows: (a) ranging frequency: 3000 samples per second; (b) motor frequency: 6 Hz, controlled with a PWM signal; (c) ranging distance: 0.12–8 m; (d) field of view: 0–360°; (e) angular resolution: 0.72° (at 6 Hz); (f) supply voltage: 4.8–5.2 V.
3.2. Reference Point Update Method (RPUM)
Due to the reference point update errors observed in the dynamic positioning experiments, we propose a Reference Point Update Method to reduce the error values in dynamic positioning. At the start of the experiment, we updated the reference points in the regions. Taking Region N as an example, we learned from the previous experiments that the dynamic positioning performance is better near the region’s center. Therefore, we expanded the region’s reference points by adding three new reference points on each side of the center point, creating new reference points N1, N2, …, N6. This process was repeated for each region to update the reference points. Next, dynamic positioning was performed laterally and longitudinally, with 20 measurement data points collected in each direction. The comparison between the positioning trajectory and the actual trajectory was then observed.
The Reference Point Update Method (RPUM) algorithm is as follows:
Input: LiDAR distance measurements to the three reference points.
Output: Estimated position $(x, y)$.
Step 1: Initialization and calibration. Define three anchor points with known positions: Anchor A: $(x_A, y_A)$; Anchor B: $(x_B, y_B)$; Anchor C: $(x_C, y_C)$. As a one-time setup, measure at known test and reference points in the area and calculate the distance offset for each anchor and the position offset for each region.
Step 2: Real-time data filtering. Read the raw distances $d_A$, $d_B$, and $d_C$ to the three anchors from the LiDAR. Apply an exponential moving average (EMA) to each distance measurement, $d_i \leftarrow \alpha d_i^{\mathrm{raw}} + (1 - \alpha) d_i^{\mathrm{prev}}$, and then apply the distance calibration by subtracting the trained offsets.
Step 3: Position calculation (trilateration). Given the three circle equations, Equations (2) through (4), subtract pairs of equations to eliminate the quadratic terms and solve the resulting linear system, Equations (5) through (7), for the raw position $(x, y)$. Then apply the position calibration offset of the current region.
Step 4: Go to Step 2.
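A compact Python sketch of the RPUM loop follows. The EMA weight, the offset values, and the single fixed region are assumptions for illustration, not the paper's calibration results:

```python
import numpy as np

ALPHA = 0.3                                  # EMA weight (assumed value)
ANCHORS = np.array([[0, 1406], [2536, 7647], [5970, 1406]], float)  # A, B, C
DIST_OFFSET = np.array([18.0, -7.0, 11.0])   # Step 1: per-anchor range bias
POS_OFFSET = np.array([0.0, 0.0])            # Step 1: region offset (zero here)

def trilaterate(d):
    """Step 3: linearized trilateration, Equations (5)-(7)."""
    (xa, ya), (xb, yb), (xc, yc) = ANCHORS
    A = 2.0 * np.array([[xa - xc, ya - yc], [xb - xc, yb - yc]])
    b = np.array([xa**2 - xc**2 + ya**2 - yc**2 + d[2]**2 - d[0]**2,
                  xb**2 - xc**2 + yb**2 - yc**2 + d[2]**2 - d[1]**2])
    return np.linalg.lstsq(A, b, rcond=None)[0]

def rpum_stream(raw_measurements):
    ema = None
    for raw in raw_measurements:
        # Step 2: EMA filter, then subtract the trained distance offsets.
        ema = raw if ema is None else ALPHA * raw + (1.0 - ALPHA) * ema
        yield trilaterate(ema - DIST_OFFSET) + POS_OFFSET

# Simulated biased, noisy ranges (mm) toward measuring point M (2603, 3058):
rng = np.random.default_rng(1)
true_d = np.array([3083.0, 4589.5, 3750.4])
stream = (true_d + DIST_OFFSET + rng.normal(0.0, 8.0, 3) for _ in range(20))
for xy in rpum_stream(stream):
    pass
print(np.round(xy))  # converges near (2603, 3058)
```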
4. Instant Indoor Positioning
4.1. Fixed Point Positioning
In the experiment, the three points that performed better laterally and longitudinally were selected for real-time indoor positioning. N, M, and L were chosen for the lateral points, and for the longitudinal points, G, I, and L were selected, as shown in Figure 6. Since the plane created at the reference points cannot rotate with the movement of the LiDAR, the middle value of the plane was kept stationary in this experiment. Additionally, the error cancelation method was applied to reduce positioning errors. Twenty data points were used as the training set, and one hundred data points were used as the test set to observe the CDF of each point and their positioning trajectories.
Let us first look at the static positioning in the lateral direction. As shown in Figure 7, the positioning error at point L decreased to within 20 mm after applying the EC method. Figure 8 shows that the positioning error at point M dropped from over 50 mm to within 20 mm after applying EC. Similarly, in Figure 9, the positioning error at point N decreased from over 60 mm to around 20 mm after applying EC, and the measured position was adjusted from the right side of the actual measurement point back to the vicinity of the actual measurement point.
Figure 10 and Figure 11 show that the positioning errors at points L, M, and N, which initially ranged from 50 mm to 60 mm, have all decreased to below 20 mm on average, with stable values.
Next, let us examine the static positioning in the longitudinal direction. It is omitted here since point L overlaps with the previous lateral analysis. As shown in Figure 12, the positioning error at point G decreased from around 70 mm to approximately 20 mm after applying the EC method. In Figure 13, the positioning error at point I dropped from above 30 mm to within 30 mm after applying EC, and the measured positioning points became more concentrated around the actual measurement point.
Figure 14 and Figure 15 show that the positioning errors at points G, I, and L, which initially reached up to 70 mm, have all decreased to below 30 mm on average, with stable values.
Figure 16 and Figure 17 display the dynamic positioning trajectories in the longitudinal and lateral directions, respectively, where RP represents the actual movement trajectory, and EP represents the estimated dynamic positioning trajectory.
Figure 16 shows that positioning is good for the two to three points closest to the centers of the regions around points G, I, and L, whereas the more distant points exhibit larger positioning deviations. For example, the two points near I are shifted to the upper left in the G region due to the reference point update error during region identification. It is also worth noting that in each area, the lower two to three points exhibit larger positioning deviations. This increased deviation is caused by the wall at known point C, which has a noticeable step that increases the positioning error. The average dynamic positioning error in the longitudinal direction is 320.7 mm.
In Figure 17, the positioning of all six points in the N region is relatively stable and closely matches the actual measurement trajectory. In the L region, only the two points farther from the region's center exhibit larger positioning deviations, whereas the points closer to the center follow the actual measurement trajectory closely. In the M region, points on the left side of the center exhibit larger positioning deviations due to reference point update errors, whereas points on the right side show relatively stable positioning, closely matching the actual measurement trajectory. The average dynamic positioning error in the lateral direction is 96.72 mm.
4.2. Dynamic Point Positioning
The experimental results in the previous subsection showed that when moving in the longitudinal and lateral directions, some test points deviated significantly from the original path, mainly because of the uneven reflection surface at reference point C. To solve this problem, we added six points (N1, N2, …, N6) at equal intervals in the horizontal direction and six points at equal intervals in the vertical direction, forming a region with the optimal test point N as its center. For the center point of each region, we calculated the region's error-cancelation offset and the offsets of the distance measurements to the three reference points. This allowed the RPUM algorithm proposed in the previous section to obtain the correct distance values, and the experimental results showed a significant improvement in the error problem.
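A small sketch of this region construction is shown below; the 100 mm spacing is an assumption, since the paper specifies only equal intervals:

```python
def region_points(center_xy, spacing_mm=100.0, per_side=3):
    """Build six horizontal and six vertical reference points at equal
    intervals around a region center (spacing is an assumed value)."""
    cx, cy = center_xy
    offsets = [spacing_mm * k for k in range(1, per_side + 1)]
    horizontal = [(cx + s * o, cy) for o in offsets for s in (-1, 1)]  # N1..N6
    vertical = [(cx, cy + s * o) for o in offsets for s in (-1, 1)]
    return horizontal + vertical

print(region_points((1764, 3035)))  # region centered on point N (Table 2)
```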
Figure 18 and Figure 19 display the dynamic positioning trajectories after applying the Reference Point Update Method in the longitudinal and lateral directions, respectively. Here, RP represents the actual movement trajectory, EP represents the estimated dynamic positioning trajectory, and r_EP represents the estimated dynamic positioning trajectory after applying the Reference Point Update Method.
Figure 18 shows that the positioning deviations caused by reference point errors at the outskirts of Regions G, I, and L were corrected after applying the Reference Point Update Method. The most significant improvement was observed in Region G, where the average positioning error decreased from 312 mm to 32 mm. Additionally, the positioning points at the outskirts of Regions I and L were corrected to closely follow the actual dynamic trajectory. The average dynamic positioning error in the longitudinal direction decreased from 320.7 mm to 22.3 mm, resulting in a 93% improvement.
In Figure 19, the positioning deviations in Regions M and L caused by reference point errors were also corrected after applying the Reference Point Update Method. The most significant improvement was in Region M, where the average positioning error decreased from 151.75 mm to 17.57 mm. Moreover, the two outermost positioning points in Region L were corrected to closely match the actual dynamic trajectory. The average dynamic positioning error in the lateral direction decreased from 96.7 mm to 18.38 mm, resulting in an 81% improvement.
The dynamic positioning trajectory graphs show that the positioning points with significant deviations between the EP and RP trajectories were corrected after applying the Reference Point Update Method. All dynamic positioning points in the r_EP trajectory closely matched the actual trajectory. Furthermore, the overall average positioning error decreased from 208.71 mm to 20.34 mm, resulting in a 90% improvement. This demonstrates that the Reference Point Update Method effectively enhances the accuracy of dynamic positioning.
5. Conclusions
This study aims to investigate the positioning errors of LiDAR in indoor environments through 2D indoor positioning experiments. We proposed creating a plane at known points to maintain a vertical alignment between the measurement points and the known points. We introduced an error cancelation method to offset the estimated positioning errors. The experimental results demonstrate that the proposed error cancelation method and plane creation can effectively reduce the positioning errors of LiDAR in indoor environments, with significant improvements observed at all 17 measurement points. Points with previously significant positioning errors exceeding 100 mm were improved to nearly 10 mm.
Moreover, using the previously measured data, we divided the indoor space into regions for real-time indoor positioning. In static positioning, the errors were reduced to approximately 20 mm to 30 mm. In the dynamic positioning results, both longitudinal and lateral, the positioning points near the center of each region closely matched the actual path, whereas points farther from the region center exhibited larger positioning deviations, which were further exacerbated by the experimental environment. To address this, we proposed a reference point updating method, which reduced the average dynamic positioning error from 208.71 mm to 20.34 mm, achieving a 90% improvement.
Author Contributions: Conceptualization, Y.-F.H., C.-M.C., J.-Y.L. and T.-J.C.; Methodology, Y.-F.H., C.-M.C. and J.-Y.L.; Software, J.-Y.L.; Validation, Y.-F.H.; Formal analysis, Y.-F.H.; Investigation, J.-Y.L.; Resources, T.-J.C.; Data curation, C.-M.C.; Writing—original draft, C.-M.C. and T.-J.C.; Writing—review & editing, Y.-F.H., C.-M.C. and T.-J.C.; Visualization, T.-J.C.; Supervision, Y.-F.H. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The original contributions presented in the study are included in the article; further enquiries can be directed to the corresponding author.
Funding: This research was supported by the National Science and Technology Council (NSTC), Taiwan, with grant number NSTC 112-2221-E-324-010.
Conflicts of Interest: The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1 Principle of trigonometric distance measurement.
Figure 2 Coordinate estimation of triangulation positioning.
Figure 3 Three intersection points are formed by distance measurement errors during triangulation.
Figure 4 Experimental area with three reference points and nine measurement points.
Figure 5 Measurement flow chart.
Figure 6 Coordinate diagram of fixed-point positioning.
Figure 7 L fixed point positioning trajectory.
Figure 8 M fixed point positioning trajectory.
Figure 9 N fixed point positioning trajectory.
Figure 10 Uncorrected horizontal fixed-point positioning CDF.
Figure 11 Horizontal fixed-point positioning CDF with EC added.
Figure 12 G fixed-point positioning trajectory.
Figure 13 I fixed-point positioning trajectory.
Figure 14 Uncorrected longitudinal fixed-point positioning CDF.
Figure 15 Longitudinal fixed-point positioning CDF with EC added.
Figure 16 Horizontal dynamic positioning trajectory.
Figure 17 Longitudinal dynamic positioning trajectory.
Figure 18 Comparison of longitudinal dynamic positioning trajectories by adding reference point update method.
Figure 19 Comparison of horizontal dynamic positioning trajectories by adding reference point update method.
Table 1. Comparison of various indoor positioning technologies.
| Indicator | Bluetooth | Wi-Fi | UWB | RFID | LiDAR |
|---|---|---|---|---|---|
| Accuracy | Meter | Meter | Centimeter | Meter | Centimeter |
| Power Consumption | Low | Low | Medium | Low | Medium |
| Cost | Low | Low | High | Low | Medium |
| Popularity | High | High | Medium | Medium | High |
| Security | Medium | Medium | Low | Low | High |
Table 2. Coordinate data of measuring points and reference points (mm).
| Point | X Coordinate | Y Coordinate |
|---|---|---|
| A | 0 | 1406 |
| B | 2536 | 7647 |
| C | 5970 | 1406 |
| D | 2523 | 4419 |
| G | 3352 | 4433 |
| H | 1748 | 4411 |
| I | 3386 | 3733 |
| J | 2587 | 3720 |
| K | 1751 | 3703 |
| L | 3394 | 3092 |
| M | 2603 | 3058 |
| N | 1764 | 3035 |
1. Boman, J.; Taylor, J.; Ngu, A.H. Flexible IoT Middleware for Integration of Things and Applications. Proceedings of the 10th IEEE International Conference on Collaborative Computing: Networking, Applications and Workshops; Miami, FL, USA, 22–25 October 2014; pp. 481-488.
2. Gui, L.; Val, T.; Wei, A.; Taktak, S. An Adaptive Range–Free Localisation Protocol in Wireless Sensor Networks. Int. J. Ad Hoc Ubiquitous Comput.; 2014; 15, pp. 38-56. [DOI: https://dx.doi.org/10.1504/IJAHUC.2014.059906]
3. Kharidia, S.A.; Ye, Q.; Sampalli, S.; Cheng, J.; Du, H.; Wang, L. HILL: A Hybrid Indoor Localization Scheme. Proceedings of the 10th International Conference on Mobile Ad-hoc and Sensor Networks; Maui, HI, USA, 19–21 December 2014; pp. 201-206.
4. Huang, H.; Zhou, J.; Li, W.; Zhang, J.; Zhang, X.; Hou, G. Wearable Indoor Localisation Approach in the Internet of Things. IET Netw.; 2016; 5, pp. 122-126. [DOI: https://dx.doi.org/10.1049/iet-net.2016.0007]
5. Herman, G.T. A Survey of 3D Medical Imaging Technologies. IEEE Eng. Med. Biol. Mag.; 1990; 9, pp. 15-17. [DOI: https://dx.doi.org/10.1109/51.105212] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18238351]
6. Michael, H.; Mueller, A.; Luettel, T.; Wunsche, H.J. LIDAR-based 3D Object Perception. Proceedings of the 1st International Workshop on Cognition for Technical Systems; Munich, Germany, 6–8 October 2008; Volume 1.
7. Huang, Y.-F.; Lin, Y.-J.; Liao, J.-Y.; Lin, F.-L.; Lim, Z.Y. An Error Cancellation Method for LiDAR Technology Based Trilateration Algorithms in Indoor Positioning. Proceedings of the 2025 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW); Kaohsiung, Taiwan, 16–19 July 2025; pp. 467-468.
8. Schröder, Y.; Wolf, L. InPhase: Phase-based Ranging and Localization. ACM Trans. Sens. Netw.; 2022; 18, pp. 1-39. [DOI: https://dx.doi.org/10.1145/3494542]
9. Li, Z.; Liu, Y.; Chang, F.; Zhang, J.; Lu, M. Research on UWB and LiDAR Fusion Positioning Algorithm in Indoor Environment. Comput. Eng. Appl.; 2021; 57, pp. 260-266.
10. Chen, C.-M.; Huang, Y.-F.; Jheng, Y.-T. An Efficient Indoor Positioning Method with the External Distance Variation for Wireless Networks. Electronics; 2021; 10, 1949. [DOI: https://dx.doi.org/10.3390/electronics10161949]
11. Huang, Y.-F.; Hsu, Y.-H.; Lin, J.-Y.; Chen, C.-M. A Novel Adaptive Indoor Positioning Using Mobile Devices with Wireless Local Area Networks. Electronics; 2024; 13, 895. [DOI: https://dx.doi.org/10.3390/electronics13050895]
12. Khan, M.U.; Zaidi, S.A.A.; Ishtiaq, A.; Bukhari, S.U.R.; Samer, S.; Farman, A. A Comparative Survey of LiDAR-SLAM and LiDAR-based Sensor Technologies. Proceedings of the 2021 Mohammad Ali Jinnah University International Conference on Computing; Karachi, Pakistan, 15–17 July 2021; pp. 1-8. [DOI: https://dx.doi.org/10.1109/MAJICC53071.2021.9526266]
13. Wang, Y.-T.; Peng, C.-C.; Ravankar, A.A.; Ravankar, A. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm. Sensors; 2018; 18, 1294. [DOI: https://dx.doi.org/10.3390/s18041294] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29690624]
14. Sánchez, E.; Botsch, M.; Huber, B.; García, A. High Precision Indoor Positioning using LiDAR. Proceedings of the 2019 DGON Inertial Sensors and Systems Symposium; Braunschweig, Germany, 10–11 September 2019; pp. 1-20. [DOI: https://dx.doi.org/10.1109/ISS46986.2019.8943731]
15. Karfakis, P.T.; Couceiro, M.S.; Portugal, D. NR5G-SAM: A SLAM Framework for Field Robot Applications Based on 5G New Radio. Sensors; 2023; 23, 5354. [DOI: https://dx.doi.org/10.3390/s23115354] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37300084]
16. Hansard, M.; Lee, S.; Choi, O.; Horaud, R. Time-of-Flight Cameras: Principles, Methods and Applications; Springer: London, UK, 2012; [DOI: https://dx.doi.org/10.1007/978-1-4471-4658-2]
17. Droeschel, D.; Stückler, J.; Holz, D.; Behnke, S. Towards Joint Attention for a Domestic Service Robot—Person Awareness and Gesture Recognition Using Time-of-Flight Cameras. Proceedings of the 2011 IEEE International Conference on Robotics and Automation; Shanghai, China, 9–13 May 2011; pp. 1205-1210. [DOI: https://dx.doi.org/10.1109/ICRA.2011.5980067]
18. Huang, Y.-F.; Chang, S.-W.; Sheu, Y.-H. Performance of Adaptive Offset Cancellation Method for UWB Indoor Positioning System. Proceedings of the 2023 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW); Pingtung, Taiwan, 17–19 July 2023; pp. 589-590.
19. Shenzhen EAI Technology. YDLIDAR X2 Data Sheet. 2024; Available online: https://www.ydlidar.com/download/category/triangulation (accessed on 1 November 2025).
20. Yoshizawa, T.; Fukiya, Y.; Shinkuma, R.; Trovato, G. Estimation of Human Mobility State Using Multi-LiDAR Sensor Network. Proceedings of the 2025 Fifteenth International Conference on Mobile Computing and Ubiquitous Networking (ICMU); Busan, Korea, 10–12 September 2025; pp. 1-6. [DOI: https://dx.doi.org/10.23919/ICMU65253.2025.11219157]
21. Ikeda, K.; Hayakawa, Y.; Suzuki, R.; Nagai, S.; Sako, O.; Nagata, R. Optical LiDAR Communication: Repurposing Existing LiDAR Sensors for Infrastructure-to-Vehicle Communication. IEEE Robot. Autom. Lett.; 2025; 10, pp. 12732-12739. [DOI: https://dx.doi.org/10.1109/LRA.2025.3619748]
22. Srinara, S.; Chiu, Y.T.; Tsai, M.L.; Chiang, K.W. High-Definition Point Cloud Map-Based 3D LiDAR-IMU Calibration for Self-Driving Applications. ISPRS Arch.; 2022; 43, pp. 271-277. [DOI: https://dx.doi.org/10.5194/isprs-archives-XLIII-B1-2022-271-2022]
23. Rotter, P.; Klemiato, M.; Skruch, P. Automatic Calibration of a LiDAR–Camera System Based on Instance Segmentation. Remote Sens.; 2022; 14, 2531. [DOI: https://dx.doi.org/10.3390/rs14112531]
24. Gu, Q.; Zhou, Y.; Zhao, H.; Ma, S.; Sun, W.; Liu, D. Automatic calibration of airborne LiDAR installation angle based on point cloud conjugate matching. J. Phys. Conf. Ser.; 2024; 2718, 12015. [DOI: https://dx.doi.org/10.1088/1742-6596/2718/1/012015]
25. Fan, X.; Tang, J.; Wen, J.; Xu, X.; Liu, H. An incremental LiDAR/POS online calibration method. Meas. Sci. Technol.; 2023; 34, 85201. [DOI: https://dx.doi.org/10.1088/1361-6501/accc21]
26. Long, Z.; Xiang, Y.; Lei, X.; Li, Y.; Hu, Z.; Dai, X. Integrated Indoor Positioning System of Greenhouse Robot Based on UWB/IMU/ODOM/LIDAR. Sensors; 2022; 22, 4819. [DOI: https://dx.doi.org/10.3390/s22134819] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35808314]
27. Holmberg, M.; Karlsson, O.; Tulldahl, M. Lidar Positioning for Indoor Precision Navigation. Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); New Orleans, LA, USA, 19–20 June 2022; pp. 358-367. [DOI: https://dx.doi.org/10.1109/CVPRW56347.2022.00051]
28. Huai, J.; Shao, Y.; Zhang, Y.; Yilmaz, A. A Low-Cost Portable Lidar-based Mobile Mapping System on an Android Smartphone. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci.; 2025; X-G-2025, pp. 375-381. [DOI: https://dx.doi.org/10.5194/isprs-annals-X-G-2025-375-2025]
29. Cong, D.; Zhang, L.; Su, P.; Tang, Z.; Meng, Y.; Wang, Y. Design and Implementation of LiDAR Navigation System based on Triangulation Measurement. Proceedings of the 2017 29th Chinese Control and Decision Conference; Chongqing, China, 28–30 May 2017; pp. 6060-6063. [DOI: https://dx.doi.org/10.1109/CCDC.2017.7978258]
30. Shi, Q.Q.; Huo, H.; Fang, T.; Li, D.R. Using Linear Intersection for Node Location Computation in Wireless Sensor Networks. Acta Autom. Sin.; 2006; 32, pp. 907-914.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).