1. Introduction
With their flexible altitude and immunity to ground obstacles [1], unmanned aerial vehicles (UAVs) are now widely used in inspection [2], search missions [3], and cargo measurement [4]. In most of these applications, UAVs rely on GNSS as the primary source of positioning information. However, underground or indoor tunnels are GNSS-denied environments, and their low lighting, lack of features, and high structural consistency make UAV positioning even more challenging [5]. Large hydroelectric power stations and air raid shelters often contain extensive long-distance tunnels (as shown in Figure 1) and drainage pipes that require regular inspection so that cracks, water seepage, landslides, etc., can be detected promptly. Considering the stairs and potential deep-water accumulation in these tunnels, UAVs offer more advantages than unmanned ground vehicles (UGVs) and quadruped robots for these inspection tasks. Current positioning methods in GNSS-denied environments can be briefly classified into two categories: (1) positioning by external landmarks with known location information; (2) positioning by on-board sensors.
For external landmarks, Wi-Fi [6], RF backscatter [7], and LED light [8] are useful sources for UAVs to obtain position estimates in low-lighting environments. However, because of their limited range, these landmarks are only suitable for small spaces. Ultra-wideband (UWB) is better suited to positioning in large-scale GNSS-denied scenes [9], but its accuracy degrades when the signals pass through walls. In addition, laying a large number of UWB landmarks in long tunnels incurs a high cost. Besides static landmarks, Wang et al. [10] and Chen et al. [11] obtained the global position of the UAV by positioning it relative to a vehicle with known position information; however, the flight area of the UAV is then limited to the area accessible to the vehicle.
For positioning via the UAV’s own on-board sensors, visual SLAM and LiDAR SLAM are the most common localization algorithms in GNSS-denied environments. Visual SLAM systems, such as ORB-SLAM3 [12] and VINS-Fusion [13], estimate the position by tracking key points in the image. However, due to the low light and high grey-level consistency in tunnel environments, the quality of the key points is poor, resulting in lost tracking or incorrect key-point matching. The performance of visual SLAM degrades significantly in the absence of reliable key points. On the other hand, state-of-the-art LiDAR SLAM systems, such as Faster-Lio [14] and DLIO [15], typically use only LiDAR and an IMU for position estimation, which is generally accurate in most environments. However, tunnel-like environments are also degeneracy environments for LiDAR SLAM, because the point clouds are highly consistent along the long tunnel.
To this end, we propose a multi-sensor fusion method that adds an optical flow sensor to solve the problem whereby the above traditional SLAM methods cannot obtain an accurate position estimate in tunnel-like degeneracy environments (Figure 1). Our contributions are as follows:
We present a direction-separated data fusion approach that is capable of fusing optical flow odometry in the degeneracy direction of LiDAR SLAM only, ensuring the accuracy of the position estimates of the whole system;
We present a low-computation failure and recovery detector that enables our data fusion approach to perform fast switching between degeneracy environments and general environments, as well as an offset module to guarantee smoothness during switching;
The performance of our approach is verified through real-world UAV experiments against the state of the art, as shown in the video at
https://youtu.be/ITytYCW8y5w (accessed on 21 July 2024).
For the benefit of the community, we have provided our code in open-source format.
2. Related Work
Since only one camera is needed as a hardware device, the optical flow method has been widely used on unmanned vehicles. However, because the optical flow method estimates motion at the image scale rather than the real-world scale, it is mostly used for target recognition and depth estimation rather than for estimating the velocity or position of the vehicle itself. For example, OF-VO [16] used LiDAR for position estimation and the optical flow method for pedestrian detection and velocity estimation to generate an obstacle-avoidance navigation path. Jiang et al. [17] and Liu et al. [18] used the optical flow method to generate pseudo-LiDAR point clouds. The real-world scale can be restored, and the optical flow method used for position estimation, by feeding it into a network together with LiDAR. FuseMODNet [19] proposes a CNN architecture that fuses RGB images, RGB flow, and LiDAR flow to estimate the velocity of dynamic objects. Pandya et al. [20] fused mmWave FMCW radar and optical flow data for velocity estimation. However, given the payload limitations of UAVs, deploying a network with significant computing costs on board is not practical. Therefore, estimating the velocity directly, without relying on a network, is more suitable for UAV use. Zheng et al. [21] used UAV attitude and altitude data with optical flow for velocity estimation, but the altitude was estimated from visual odometry, which is unsuitable for low-lighting tunnel-like environments. Yun et al. [22] used LiDAR to estimate the distance and flatness of the ground, which was used to restore the optical flow position estimate to the real-world scale. Inspired by these approaches, we use the UAV’s attitude data, a distance sensor, and a camera with an optical flow method for position estimation in the degeneracy direction.
Filters, such as the Kalman filter, are a common tool for multi-sensor data fusion. Premachandra et al. [9] used an extended Kalman filter (EKF) to fuse UWB and radar. Du et al. [23] also used an EKF to fuse multiple heterogeneous sensors to solve the problem of position estimation across multiple environments. Zhen et al. [24] used a Gaussian particle filter to obtain position measurements from UWB distance measurements and fused them with LiDAR through an error-state Kalman filter (ESKF). However, these approaches do not enable targeted fusion in the degeneracy direction of the primary sensor, because the sensors are assumed to be uniformly reliable in all directions. To address this in a tunnel-like environment, Kim et al. [25] used a point landmark map to locate the lane information of the vehicle and constrained the position estimates of the other sensors so that they did not deviate from the lane in the lateral direction. In addition, Zhang et al. [26] defined a degeneracy factor to separate degenerate directions in the state space; a nonlinear optimization problem is then solved using only the sensor data in well-conditioned directions, avoiding the errors introduced by sensor data in the degenerate direction. Hinduja et al. [27] improved this approach and applied it to underwater SLAM with sonar sensors. XICP [28] proposes a fine-grained localizability detection and localizability-aware constrained ICP optimization module for underground-mine-like LiDAR degeneracy environments. However, its code is not yet available in open-source format. Inspired by these approaches, we propose a degenerate-direction-separated data fusion method with lower computing costs that is more suitable for tunnel-like degeneracy environments.
3. Method
3.1. System Overview
LOFF is an odometry method that combines LiDAR SLAM and optical flow odometry to improve the performance of traditional LiDAR SLAM in tunnel-like degeneracy environments. Our approach comprises three modules, as shown in Figure 2. The first module is a low-computation and fast-response failure detector designed to isolate the erroneous mutation data produced by LiDAR SLAM in the degeneracy environment and thus guarantee the flight stability of the UAV (Section 3.3). The second is an odometry fusion module (Section 3.2), which replaces the failed direction of LiDAR SLAM with single-direction optical flow odometry; this module also includes a direction detector to identify the degeneracy direction. The third module is a recovery detector, used to detect whether the UAV has exited the degeneracy environment so that LiDAR SLAM can again be used as the primary source of the state estimate. The offset recorder in this module guarantees UAV stability during switching (Section 3.3).
Our experimental platform is an assembled quadrotor UAV with a Pixhawk 6C flight control unit, as shown in Figure 3. The LiDAR on top is a LIVOX MID-360, which provides 360-degree non-repetitive scanning point clouds for the LiDAR SLAM. The optical flow module at the bottom of the UAV consists of a Benewake TFmini Plus distance sensor, a monocular camera, and an LED light. Depending on the requirements of the experimental environment, this module can be installed either at the bottom or on the side of the UAV. For comparison with visual SLAM, a RealSense T265 was installed at the front of the UAV; notably, it did not contribute data to our approach. The onboard computer is an Intel i7 machine running Ubuntu 20.04 LTS and ROS, which processes the above sensor data and runs our approach.
3.2. Direction-Separated Data Fusion
Due to the consistency problem of point clouds along the axial direction in tunnel-like degeneracy environments, the iterative closest point (ICP)-based LiDAR SLAM will eliminate one degree of freedom [28]. Therefore, in contrast to traditional data fusion algorithms (such as the extended Kalman filter [29], graph optimization [30], etc.), our approach directly isolates the odometry data in the degeneracy direction from LiDAR SLAM.
Firstly, a detector identifies the degeneracy direction (the axial direction of the tunnel) so that the position data along this direction can be removed in the next step, as shown in Figure 4. To minimize the computation time, we employ a small detection box centred on the UAV. After this, each LiDAR point within the detection box forms a surface with its K-nearest neighbors (KNN), and their properties (normal and curvature of the surface) are estimated [31]. To mitigate errors introduced by obstacles on the wall, we estimate the normal of the tunnel wall by averaging the normal vectors of the 30 flattest surfaces. The handing deviation angle (in radians) is computed as follows:
θ = arctan2(n_y^b, n_x^b), (1)

where n^b = [n_x^b, n_y^b, n_z^b]^T represents the unit vector in the normal direction of the wall in the UAV body frame. Since the walls are generally perpendicular in tunnels and the pitch and roll angles of our UAV are small during low-speed flight, we omit the projection process onto the UAV X-Y plane in practical applications. Subsequently, the degeneracy direction can be calculated via

q_d^w = q_b^w ⊗ q_z(θ), with q_z(θ) = [cos(θ/2), 0, 0, sin(θ/2)]^T, (2)
where q_b^w denotes the quaternion representing the UAV body frame with respect to the world frame, and we define the quaternion q_d^w as the “degeneration frame” with respect to the world frame. ⊗ is the quaternion multiplication operator, described in [32].

Secondly, we propose a direction-separated data fusion approach to fuse optical flow odometry with LiDAR SLAM. The degeneracy environments of LiDAR SLAM are usually divided into three situations: translational degeneracy, rotational degeneracy, and combined degeneracy [28]. Tunnel-like environments belong to the first type. In such environments, the point clouds received from the LiDAR are almost identical when the sensor moves along the axial direction. Consequently, both point-to-point ICP [33]- and plane-to-plane GICP [34]-based LiDAR SLAM lose accuracy in this direction, as shown in Figure 5. Therefore, unlike traditional data fusion, we treat the LiDAR SLAM position estimate in the degeneracy direction as interference. The fused position estimate is calculated by replacing the degeneracy-direction component of the LiDAR odometry, obtained via (2), with optical flow odometry via
(3)
where the homogeneous vectors of the optical flow odometry in the degeneration frame, p_o^d, and of the LiDAR SLAM odometry in the world frame, p_L^w, are defined as

(4)
where x_L, y_L, and z_L represent the position estimates of the LiDAR odometry, and the remaining entries denote the increments in optical flow odometry between samples. As shown in Figure 6, the increment along the degeneration direction is calculated by projecting the optical flow odometry onto that direction. In practical applications, given the high sampling rate and low flight speeds, the trajectory of the UAV can be approximated as a sum of straight-line segments.

The final part of this module records the offset between the position estimates of LiDAR SLAM and optical flow odometry. LiDAR SLAM and optical flow odometry form a loosely coupled system and run independently, so their position estimates do not interfere with each other. Because of their different positioning accuracies, the two estimates differ even when the UAV follows the same path, which would cause an abrupt change in the LOFF position estimate when the source of position data is switched. Therefore, the last position estimate of the LiDAR SLAM before failure and the offset between the state estimate of the UAV and the LiDAR SLAM are introduced to maintain the smoothness of the UAV’s state estimation during flight. By incorporating these into (3), the position estimate can be written as
(5)
where

(6)
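To make the direction-separated fusion concrete, the following Python sketch illustrates the steps described above under our reading of (1)–(6): estimating the wall normal from the points inside the detection box, deriving the heading deviation angle, and replacing the degeneracy-direction component of the LiDAR position with accumulated optical flow increments plus the recorded offset. The function names (wall_normal_from_box, fuse_position), the axis convention, and the use of SciPy’s KD-tree are our assumptions for illustration, not the released implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def wall_normal_from_box(points_body, k=20, n_flattest=30):
    """Estimate the tunnel-wall normal in the UAV body frame.

    points_body: (N, 3) LiDAR points inside the detection box, body frame.
    A local plane is fitted to each point's k nearest neighbours; the normals
    of the n_flattest (lowest-curvature) patches are averaged, as in the text.
    """
    k = min(k, len(points_body))
    tree = cKDTree(points_body)
    normals, curvatures = [], []
    for p in points_body:
        _, idx = tree.query(p, k=k)
        nbrs = points_body[idx] - points_body[idx].mean(axis=0)
        # Smallest eigenvector of the local covariance ~ surface normal.
        w, v = np.linalg.eigh(nbrs.T @ nbrs)
        normals.append(v[:, 0])
        curvatures.append(w[0] / max(w.sum(), 1e-9))  # flatness measure
    flattest = np.argsort(curvatures)[:n_flattest]
    n = np.mean([normals[i] for i in flattest], axis=0)
    return n / np.linalg.norm(n)

def degeneration_yaw(n_body):
    """Heading deviation angle between the body x-axis and the wall normal."""
    return np.arctan2(n_body[1], n_body[0])

def fuse_position(p_lidar_w, R_body_w, n_body, flow_increments_d, offset_w):
    """Replace the degeneracy-direction component of the LiDAR position.

    p_lidar_w:         LiDAR SLAM position in the world frame, shape (3,)
    R_body_w:          rotation matrix body -> world
    n_body:            wall normal in the body frame
    flow_increments_d: optical-flow increments projected on the degeneration
                       direction (scalars, real-world scale)
    offset_w:          offset recorded at the switching instant, shape (3,)
    """
    theta = degeneration_yaw(n_body)
    # Tunnel axis: perpendicular to the wall normal in the horizontal plane
    # (assumed convention for this sketch).
    axis_body = np.array([-np.sin(theta), np.cos(theta), 0.0])
    d_w = R_body_w @ axis_body
    d_w /= np.linalg.norm(d_w)
    # Remove the LiDAR estimate along the degenerate axis ...
    p_perp = p_lidar_w - np.dot(p_lidar_w, d_w) * d_w
    # ... and replace it with the accumulated optical-flow displacement.
    p_along = sum(flow_increments_d) * d_w
    return p_perp + p_along + offset_w
```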
3.3. LiDAR Odometry Failure and Recovery Detector
Quadrotor UAVs exhibit high dynamic responsiveness and, at the same time, are intolerant of collisions; therefore, an accurate and real-time state estimate, especially of velocity and attitude, is important; otherwise, a crash may occur. As shown in Figure 5, ICP-based LiDAR SLAM may produce inaccurate position estimates in degeneracy environments, leading to the loss of control over the UAV’s speed. Hence, a low-computation and fast-response LiDAR odometry failure detector was designed, and the optical flow odometry dominates the position estimation in the degeneracy direction until a “recovery” signal is received from the recovery detector.
The failure detector operates by assessing the error between the LiDAR odometry and the integration of the IMU. When this error surpasses a predetermined threshold, the detector issues a failure signal. Considering the error accumulation effect of the IMU [35], we compute the error between two consecutive LiDAR odometry frames via
e_k = ||(p_L^k − p_L^{k−1}) − Δp_I||, (7)
where p_L^{k−1} and p_L^k denote the position estimates of the LiDAR odometry in the (k−1)-th and k-th frames, respectively, and Δp_I represents the position integration of the IMU measurements between these two frames via [15]

(8)
where the quaternion q denotes the IMU body frame with respect to the world frame, while a and ω represent the linear acceleration and angular velocity measurements of the IMU, respectively. Since the IMU sampling frequency exceeds that of the LiDAR odometry, the integration in (8) runs over the N IMU measurements received between two LiDAR odometry frames. We also experimented with using optical flow odometry and the covariance of the LiDAR point clouds as the failure detector; however, neither proved as accurate or responsive as the current detector.

To ensure the UAV’s flight safety, we only generate the recovery signal after the UAV has completely exited the degeneracy environment. In tunnel-like environments, LiDAR odometry fails due to the absence of walls or obstacles in the degeneracy direction that could be used for position estimation. Therefore, we calculate the surface normal vectors of the point clouds [31] both in front of and behind the UAV. If enough of these vectors are perpendicular to the degeneracy direction (calculated in (2)), the recovery signal is activated.
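As an illustration of this check, the sketch below compares the displacement reported by the LiDAR odometry between two frames with the displacement obtained by integrating the IMU measurements over the same interval, and raises a failure flag when the discrepancy exceeds a threshold, in the spirit of (7) and (8). The integration scheme, the gravity handling, and the threshold value are assumptions for illustration rather than the exact on-board implementation.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def imu_displacement(imu_samples, v0, dt):
    """Integrate N IMU samples between two LiDAR odometry frames.

    imu_samples: list of (R_world_imu, accel_body) tuples, where R_world_imu
                 is the IMU orientation (rotation matrix) in the world frame
                 and accel_body is the measured linear acceleration.
    v0:          velocity estimate at the first frame (world frame).
    dt:          IMU sampling period in seconds.
    """
    p, v = np.zeros(3), v0.copy()
    for R, a in imu_samples:
        a_world = R @ a + GRAVITY           # remove gravity in the world frame
        p += v * dt + 0.5 * a_world * dt**2
        v += a_world * dt
    return p

def lidar_failed(p_lidar_prev, p_lidar_curr, imu_samples, v0, dt, threshold=0.3):
    """Failure detector: LiDAR displacement vs. IMU-integrated displacement."""
    delta_lidar = p_lidar_curr - p_lidar_prev
    delta_imu = imu_displacement(imu_samples, v0, dt)
    error = np.linalg.norm(delta_lidar - delta_imu)
    return error > threshold                # threshold value is illustrative
```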
To summarize, by combining (5) and the detectors mentioned above, the complete procedure of our LiDAR and optical flow odometry fusion approach can be described as in the pseudo-code in Algorithm 1.
Algorithm 1: LiDAR and optical flow odometry fusion approach
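Since the listing itself is not reproduced here, the following sketch outlines how the fusion loop could be organized from the description in Sections 3.2 and 3.3, reusing the helper functions sketched above (wall_normal_from_box, fuse_position, lidar_failed). It runs LiDAR odometry as the default source, switches to direction-separated fusion when the failure detector fires, records the offset at the switching instant, and switches back once the recovery detector confirms that the degeneracy environment has been exited. All names and the state layout are placeholders, not the authors’ released code.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class LoffState:
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))
    offset: np.ndarray = field(default_factory=lambda: np.zeros(3))
    degenerate: bool = False
    wall_normal: np.ndarray | None = None   # body-frame wall normal at failure time

def loff_update(state, lidar_pos, lidar_prev, R_body_w,
                imu_samples, v0, dt, flow_increments, box_points, recovery_ok):
    """One update step of the fusion odometry, built on the helpers above (sketch)."""
    if not state.degenerate:
        if lidar_failed(lidar_prev, lidar_pos, imu_samples, v0, dt):
            # Failure detector fired: record the degeneracy direction and the
            # offset between the current fused state and LiDAR odometry.
            state.degenerate = True
            state.wall_normal = wall_normal_from_box(box_points)
            state.offset = state.position - lidar_pos
        else:
            state.position = lidar_pos + state.offset
            return state

    if recovery_ok:
        # Recovery detector: enough wall surfaces perpendicular to the
        # degeneracy direction are visible again, so switch back to LiDAR.
        state.degenerate = False
        state.offset = state.position - lidar_pos
        state.position = lidar_pos + state.offset
    else:
        # LiDAR in well-conditioned directions, optical flow along the
        # degeneracy direction, plus the recorded offset (Section 3.2).
        state.position = fuse_position(lidar_pos, R_body_w, state.wall_normal,
                                       flow_increments, state.offset)
    return state
```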
3.4. Optical Flow Odometry
In our approach, optical flow odometry is utilized primarily to estimate the position in the degenerate direction. Tunnel-like environments, characterized by low light and featureless surroundings, are degeneracy environments for visual SLAM as well (the performance is shown in Section 4.3). Therefore, we use optical flow odometry instead, which can estimate the velocity of the UAV even when only a few key points are available. Optical flow odometry employs the Lucas–Kanade method to obtain the pixel motion distances of key points in the image and then utilizes a distance sensor to measure the distance d between the UAV and the wall, as depicted in Figure 7. The position estimate at a real-world scale is obtained by multiplying the pixel motion distance, the measured distance, and a magnification factor:
(9)
where the quaternion q_b^c represents the UAV body frame with respect to the camera frame, and the magnification factor f can be obtained through calibration. To reduce the interference of noise, a smoothing filter is applied in the optical flow estimation of the images. In addition, the position estimate in (9) is passed as an observation to a Kalman filter and combined with the IMU data to obtain a more accurate optical flow odometry position estimate.
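For illustration, the snippet below shows how the pixel motion obtained from pyramidal Lucas–Kanade tracking can be scaled to a metric displacement using the measured distance to the wall, in the spirit of (9). It relies on OpenCV’s goodFeaturesToTrack and calcOpticalFlowPyrLK; the median-based outlier handling and the pinhole scaling by the focal length stand in for the calibrated magnification factor, and are assumptions for illustration.

```python
import cv2
import numpy as np

def optical_flow_displacement(prev_gray, curr_gray, distance_m, fx, fy):
    """Metric in-plane displacement between two frames (sketch).

    prev_gray, curr_gray: consecutive 8-bit grayscale images.
    distance_m:           range-sensor distance to the wall/ground (metres).
    fx, fy:               camera focal lengths in pixels (from calibration).
    """
    # Detect a few strong corners, even in low-texture scenes.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.zeros(2)
    # Pyramidal Lucas-Kanade tracking of those corners.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good = status.reshape(-1) == 1
    if not np.any(good):
        return np.zeros(2)
    # Median pixel motion is more robust to outliers than the mean.
    flow_px = np.median((p1[good] - p0[good]).reshape(-1, 2), axis=0)
    # Pinhole model: metric displacement = pixel motion * distance / focal length.
    return np.array([flow_px[0] * distance_m / fx,
                     flow_px[1] * distance_m / fy])
```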
4. Experiments and Results
This section presents the results of real-world experiments that verify the feasibility of each module and evaluate the overall performance of LOFF. This work focuses on the feasibility of a UAV flying in GNSS-denied, low-light, degenerate tunnel-like environments. However, to the best of our knowledge, there is no public dataset for such environments. Therefore, we conduct real-world experiments and compare against state-of-the-art SLAM methods rather than relying on open-source datasets. First, we verify the feasibility of the data fusion algorithm in an indoor manual switching experiment. Next, failure detection experiments are used to evaluate the responsiveness and feasibility of our odometry switching algorithm. Finally, we evaluate the performance of LOFF and compare it with state-of-the-art SLAM algorithms in a long tunnel flight experiment.
4.1. Indoor Manual Switching Experiment
This experiment demonstrates the continuity and smoothness of the trajectory estimated by our approach. To evaluate the performance under repeated switching between LiDAR odometry alone and LiDAR odometry fused with optical flow odometry, we manually triggered the failure and recovery signals (the detectors in Section 3.3 were not active in this experiment). The trajectory of this experiment is shown in Figure 8. We switched to fused odometry during the straight-line flight of the UAV, whereas, during turns, we relied solely on LiDAR odometry. Although the orientation of the UAV changed several times, our direction-separated data fusion (Section 3.2) still fused the two odometries in the correct direction, and the fused odometry maintained a motion trend consistent with the ground truth. Due to error accumulation and the featureless environment, the accuracy of optical flow odometry is lower than that of LiDAR odometry, which was close to the ground truth in this experiment. Nevertheless, there are no abrupt changes in the position estimate during the switching process (see Figure 8), owing to the offset recorder (detailed in Section 3.2).
4.2. Failure Detection Experiment
To evaluate the performance of the odometry switching algorithm (detailed in Section 3.3) in a tunnel-like LiDAR degeneracy environment, an experiment was conducted with the UAV in a tunnel that was over 200 m long and free of obstacles. In this experiment, the UAV took off from one end of the tunnel, was maneuvered manually using our odometry as the position estimate throughout the flight, and eventually landed after entering the LiDAR degeneracy environment. As shown in Figure 9, our approach initially used LiDAR odometry only, because the LiDAR could still receive the point cloud of the wall at the end of the tunnel, and the position estimate was accurate in this situation. However, as the UAV flew further, the point cloud of the wall became sparser, increasing the likelihood of LiDAR odometry failures in the degenerate direction. At 83.15 s, the failure detector detected an instability in the LiDAR odometry, prompting our approach to switch to the fusion of optical flow odometry and LiDAR odometry. This switch avoided the sudden jump in the position estimate between 83 and 85 s and the subsequent fluctuations after 105 s.
4.3. Tunnel Flight Experiment
To comprehensively assess the performance of our odometry and compare it with state-of-the-art LiDAR SLAM and visual SLAM techniques, we conducted experiments in a real-world long straight pedestrian tunnel located within a hydropower station (Figure 10B). Due to the lack of a motion capture system or real-time kinematic (RTK) positioning, we used landmarks marked at one-meter intervals to obtain the ground truth data, as shown in Figure 10B. The trajectory of the UAV (the green line), positioned with our odometry in the forward experiment, is shown in Figure 10A. The UAV flew through two right-angle corners before entering a straight tunnel over 200 m in length. For the LiDAR SLAM, this meant flying in a general environment and then gradually transitioning to a degeneracy environment. For the visual SLAM, the flight alternated between the general environment and the degeneracy environment as the lighting changed between light and dark (as shown by the lighting conditions in Figure 10A). Compared with the LiDAR SLAM methods DLIO [15] and Faster-Lio [14] and the Intel RealSense T265 [36], the position estimates in the degeneracy direction in the forward and return experiments are shown in Figure 10C,D, respectively.
In the forward experiment, the position estimates of Faster-Lio [14], DLIO [15], and our odometry were close to the ground truth in the initial 75 s. However, the position estimate of the visual SLAM [36] failed when the UAV first entered the dark environment (25 s). When the UAV flew into the LiDAR degeneracy environment, the position estimate of Faster-Lio [14] stayed in place or occasionally moved forward by a small distance, because the point cloud data received by the LiDAR were almost identical in each frame. In contrast, the position estimate of DLIO [15] continued to move forward with fluctuations, because it is tightly coupled to the IMU. It is worth noting that the position estimate of our odometry moved forward smoothly with the ground truth, because it switched to the fusion of LiDAR odometry and optical flow odometry in the degeneracy environment. At the time of landing (170 s), our odometry had an error of 5 m, compared to 14, 43, and 70 m for DLIO [15], Faster-Lio [14], and the visual SLAM [36], respectively.
The return experiment was performed in the same pedestrian tunnel but in the opposite direction. Thus, the LiDAR SLAM started in the degeneracy environment and then transitioned to the general environment. The visual SLAM [36] had no position estimate for the first 65 s because it failed to initialize in the dark environment. Similarly, the position estimate of Faster-Lio [14] remained immobile after the point cloud of the UAV operator disappeared and returned to normal after receiving enough points of the wall (98 s). The position estimate of DLIO still moved forward with fluctuations until it received the point cloud of the wall (74 s). Unlike these, our odometry smoothly followed the ground truth, as it started with the fused odometry and later switched to LiDAR odometry only. In the return experiment, the errors of our odometry, DLIO [15], Faster-Lio [14], and the visual SLAM [36] were 3, 6, 49, and 60 m, respectively. In these experiments, our odometry outperformed all the compared algorithms. Moreover, the trajectory (Figure 10A) and the actual flight also demonstrate that our odometry performs well in both straight tunnels and tunnels with corners in tunnel-like LiDAR degeneracy environments.
5. Conclusions
To enable UAVs to achieve position estimation and stable flight in tunnel-like degeneracy environments, this work presents a LiDAR and optical flow fusion odometry (LOFF). The key innovation is a direction-separated data fusion method that compensates for the position estimation of LiDAR SLAM in the degenerate direction with optical flow odometry. In addition, equipped with our detectors and offset recorder, UAVs can stably explore and fly in tunnel-like GNSS-denied environments that include both general and degeneracy sections. We demonstrate the reliability of our approach through real-world flight experiments.
In future work, we are interested in replacing optical flow odometry with visual SLAM, which offers richer information. By harnessing the synergies between visual and LiDAR data, we anticipate achieving enhanced position estimation capabilities, particularly in more intricate environments.
Conceptualization, methodology, validation, J.Z., Z.H. and X.Z.; writing—review and editing, J.Z., F.G. and C.S.; investigation, Q.Z.; project administration, funding acquisition, R.S. All authors have read and agreed to the published version of the manuscript.
The source code of this research is available at
The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. A GNSS-denied long tunnel environment. (A,C) represent degenerate environments for LiDAR SLAM. (B) depicts a general environment with corners and stairs.
Figure 4. The normal vector of the tunnel wall is calculated from the point cloud within the detection box. The heading deviation angle can be obtained by calculating the yaw of the UAV with respect to the normal vector of the wall.
Figure 5. (A) presents the position estimate of LiDAR SLAM in a tunnel-like degeneracy environment. The UAV hovered without any movement, while the trajectory of the position estimate (illustrated by the yellow line) oscillated along the axial direction of the tunnel. (B) shows a photograph of this environment.
Figure 6. Illustration of the optical flow odometry projection in the degeneracy direction. The blue line, red vector, and green vector represent the flight trajectory of the UAV, the increment in optical flow odometry, and the projection, respectively.
Figure 7. Illustration of optical flow odometry. By combining the distance between the sensor and the wall, the motion distance on the normalized image plane can be restored to the real-world scale.
Figure 8. The indoor manual switching experiment. (A) displays LiDAR point clouds and trajectories. The white and green lines depict the ground truth and our odometry trajectories, respectively. The red, blue, yellow, and grey lines represent the trajectories obtained by fusing LiDAR odometry and optical flow odometry. (B,C) represent the position estimates along the X-axis and Y-axis of the world frame, respectively. The blue bar indicates the fusion of LiDAR and optical flow odometry, while the white bar uses LiDAR odometry alone. The trajectory of position estimation is smooth during switching.
Figure 9. The failure detection experiments. The failure detector effectively intervenes in time to prevent mutations in the position estimation.
Figure 10. Tunnel flight experiments. (A) displays the point clouds and trajectories of these experiments. (B) shows the pre-established landmarks that served as ground truth data for these experiments. (C,D) represent the results of the forward experiment and the return experiment, respectively.
References
1. Ghamari, M.; Rangel, P.; Mehrubeoglu, M.; Tewolde, G.S.; Sherratt, R.S. Unmanned aerial vehicle communications for civil applications: A review. IEEE Access; 2022; 10, pp. 102492-102531. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3208571]
2. Kim, I.H.; Jeon, H.; Baek, S.C.; Hong, W.H.; Jung, H.J. Application of crack identification techniques for an aging concrete bridge inspection using an unmanned aerial vehicle. Sensors; 2018; 18, 1881. [DOI: https://dx.doi.org/10.3390/s18061881] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29890652]
3. Mejía, A.; Marcillo, D.; Guaño, M.; Gualotuña, T. Serverless based control and monitoring for search and rescue robots. Proceedings of the 2020 15th Iberian Conference on Information Systems and Technologies (CISTI); Seville, Spain, 27 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1-6.
4. Huang, D.; Chen, J.; Chen, Y.; Hang, S.; Sun, C.; Zhang, J.; Zhan, Q.; Shen, R. Monocular Visual Measurement Based on Marking Points Regression and Semantic Information. Proceedings of the 2023 China Automation Congress (CAC); Chongqing, China, 17–19 November 2023; pp. 4644-4649. [DOI: https://dx.doi.org/10.1109/CAC59555.2023.10451861]
5. Shahmoradi, J.; Mirzaeinia, A.; Roghanchi, P.; Hassanalian, M. Monitoring of inaccessible areas in gps-denied underground mines using a fully autonomous encased safety inspection drone. Proceedings of the AIAA Scitech 2020 Forum; Orlando, FL, USA, 6–10 January 2020; 1961.
6. Zhang, S.; Wang, W.; Jiang, T. Wi-Fi-inertial indoor pose estimation for microaerial vehicles. IEEE Trans. Ind. Electron.; 2020; 68, pp. 4331-4340. [DOI: https://dx.doi.org/10.1109/TIE.2020.2984457]
7. Zhang, S.; Wang, W.; Zhang, N.; Jiang, T. LoRa backscatter assisted state estimator for micro aerial vehicles with online initialization. IEEE Trans. Mob. Comput.; 2021; 21, pp. 4038-4050. [DOI: https://dx.doi.org/10.1109/TMC.2021.3063850]
8. Rudol, P.; Wzorek, M.; Conte, G.; Doherty, P. Micro unmanned aerial vehicle visual servoing for cooperative indoor exploration. Proceedings of the 2008 IEEE Aerospace Conference; Big Sky, MT, USA, 1–8 March 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1-10.
9. Premachandra, H.; Liu, R.; Yuen, C.; Tan, U.X. UWB Radar SLAM: An Anchorless Approach in Vision Denied Indoor Environments. IEEE Robot. Autom. Lett.; 2023; 8, pp. 5299-5306. [DOI: https://dx.doi.org/10.1109/LRA.2023.3293354]
10. Wang, Z.; Liu, S.; Chen, G.; Dong, W. Robust visual positioning of the UAV for the under bridge inspection with a ground guided vehicle. IEEE Trans. Instrum. Meas.; 2021; 71, 5000610. [DOI: https://dx.doi.org/10.1109/TIM.2021.3135544]
11. Chen, J.; Chen, Y.; Huang, D.; Hang, S.; Zhang, J.; Sun, C.; Shen, R. Relative Localization of Vehicle-to-Drone Coordination Based on Lidar. Proceedings of the International Conference on Autonomous Unmanned Systems; Nanjing, China, 8–11 September 2023; Springer: Berlin/Heidelberg, Germany, 2023.
12. Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam. IEEE Trans. Robot.; 2021; 37, pp. 1874-1890. [DOI: https://dx.doi.org/10.1109/TRO.2021.3075644]
13. Qin, T.; Cao, S.; Pan, J.; Shen, S. A General Optimization-based Framework for Global Pose Estimation with Multiple Sensors. arXiv; 2019; arXiv: 1901.03642
14. Bai, C.; Xiao, T.; Chen, Y.; Wang, H.; Zhang, F.; Gao, X. Faster-LIO: Lightweight Tightly Coupled Lidar-Inertial Odometry Using Parallel Sparse Incremental Voxels. IEEE Robot. Autom. Lett.; 2022; 7, pp. 4861-4868. [DOI: https://dx.doi.org/10.1109/LRA.2022.3152830]
15. Chen, K.; Nemiroff, R.; Lopez, B.T. Direct lidar-inertial odometry: Lightweight lio with continuous-time motion correction. Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA); London, UK, 29 May–2 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 3983-3989.
16. Liang, J.; Qiao, Y.L.; Guan, T.; Manocha, D. OF-VO: Efficient navigation among pedestrians using commodity sensors. IEEE Robot. Autom. Lett.; 2021; 6, pp. 6148-6155. [DOI: https://dx.doi.org/10.1109/LRA.2021.3090660]
17. Jiang, C.; Wang, G.; Miao, Y.; Wang, H. 3D scene flow estimation on pseudo-lidar: Bridging the gap on estimating point motion. IEEE Trans. Ind. Inform.; 2022; 19, pp. 7346-7354. [DOI: https://dx.doi.org/10.1109/TII.2022.3210560]
18. Liu, H.; Liao, K.; Lin, C.; Zhao, Y.; Guo, Y. Pseudo-lidar point cloud interpolation based on 3d motion representation and spatial supervision. IEEE Trans. Intell. Transp. Syst.; 2021; 23, pp. 6379-6389. [DOI: https://dx.doi.org/10.1109/TITS.2021.3056048]
19. Rashed, H.; Ramzy, M.; Vaquero, V.; El Sallab, A.; Sistu, G.; Yogamani, S. Fusemodnet: Real-time camera and lidar based moving object detection for robust low-light autonomous driving. Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops; Seoul, Republic of Korea, 27 October–2 November 2019; pp. 2393-2402.
20. Pandya, A.; Jha, A.; Cenkeramaddi, L.R. A velocity estimation technique for a monocular camera using mmwave fmcw radars. Electronics; 2021; 10, 2397. [DOI: https://dx.doi.org/10.3390/electronics10192397]
21. Zheng, W.; Xiao, J.; Xin, T. Integrated navigation system with monocular vision and LIDAR for indoor UAVs. Proceedings of the 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA); Siem Reap, Cambodia, 18–20 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 924-929.
22. Yun, S.; Lee, Y.J.; Sung, S. Range/optical flow-aided integrated navigation system in a strapdown sensor configuration. Int. J. Control. Autom. Syst.; 2016; 14, pp. 229-241. [DOI: https://dx.doi.org/10.1007/s12555-014-0336-5]
23. Du, H.; Wang, W.; Xu, C.; Xiao, R.; Sun, C. Real-time onboard 3D state estimation of an unmanned aerial vehicle in multi-environments using multi-sensor data fusion. Sensors; 2020; 20, 919. [DOI: https://dx.doi.org/10.3390/s20030919] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32050470]
24. Zhen, W.; Scherer, S. Estimating the localizability in tunnel-like environments using LiDAR and UWB. Proceedings of the 2019 International Conference on Robotics and Automation (ICRA); Montreal, QC, Canada, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 4903-4908.
25. Kim, K.; Im, J.; Jee, G. Tunnel facility based vehicle localization in highway tunnel using 3D LIDAR. IEEE Trans. Intell. Transp. Syst.; 2022; 23, pp. 17575-17583. [DOI: https://dx.doi.org/10.1109/TITS.2022.3160235]
26. Zhang, J.; Kaess, M.; Singh, S. On degeneracy of optimization-based state estimation problems. Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA); Stockholm, Sweden, 16–21 May 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 809-816.
27. Hinduja, A.; Ho, B.J.; Kaess, M. Degeneracy-aware factors with applications to underwater slam. Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); Macau, China, 3–8 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1293-1299.
28. Tuna, T.; Nubert, J.; Nava, Y.; Khattak, S.; Hutter, M. X-icp: Localizability-aware lidar registration for robust localization in extreme environments. IEEE Trans. Robot.; 2023; 40, pp. 452-471. [DOI: https://dx.doi.org/10.1109/TRO.2023.3335691]
29. Leondes, C.T. Theory and Applications of Kalman Filtering; North Atlantic Treaty Organization, Advisory Group for Aerospace Research: Neuilly sur Seine, France, 1970; Volume 139.
30. Djuric, P.M.; Kotecha, J.H.; Zhang, J.; Huang, Y.; Ghirmai, T.; Bugallo, M.F.; Miguez, J. Particle filtering. IEEE Signal Process. Mag.; 2003; 20, pp. 19-38. [DOI: https://dx.doi.org/10.1109/MSP.2003.1236770]
31. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). Proceedings of the IEEE International Conference on Robotics and Automation (ICRA); Shanghai, China, 9–13 May 2011.
32. Trawny, N.; Roumeliotis, S.I. Indirect Kalman Filter for 3D Attitude Estimation; Technical Report 2005-002 Department of Computer Science & Engineering, University of Minnesota: Minneapolis, MN, USA, 2005.
33. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. Proceedings of the Sensor Fusion IV: Control Paradigms and Data Structures; Boston, MA, USA, 14–15 November 1991; SPIE: Bellingham, WA, USA, 1992; Volume 1611, pp. 586-606.
34. Segal, A.; Haehnel, D.; Thrun, S. Generalized-icp. Proceedings of the Robotics: Science and systems; Seattle, WA, USA, 28 June–1 July 2009; Volume 2, 435.
35. Liu, X.; Zhou, Q.; Chen, X.; Fan, L.; Cheng, C.T. Bias-error accumulation analysis for inertial navigation methods. IEEE Signal Process. Lett.; 2021; 29, pp. 299-303. [DOI: https://dx.doi.org/10.1109/LSP.2021.3129151]
36. Intel Corporation. Intel RealSense Tracking Camera T265. 2019; Available online: https://www.intelrealsense.com/tracking-camera-t265/ (accessed on 30 April 2024).
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Simultaneous Localization and Mapping (SLAM) is a common algorithm for position estimation in GNSS-denied environments. However, the high structural consistency and low lighting conditions in tunnel environments pose challenges for traditional visual SLAM and LiDAR SLAM. To this end, this paper presents LiDAR and optical flow fusion odometry (LOFF), which uses a direction-separated data fusion method to fuse optical flow odometry into the degenerate direction of the LiDAR SLAM without sacrificing accuracy. Moreover, LOFF incorporates detectors and a compensator, allowing for a smooth transition between general environments and degeneracy environments. This capability facilitates the stable flight of unmanned aerial vehicles (UAVs) in GNSS-denied tunnel environments, including corners and long, structurally consistent sections. Through real-world experiments conducted in a GNSS-denied pedestrian tunnel, we demonstrate the superior position accuracy and trajectory smoothness of LOFF compared to state-of-the-art visual SLAM and LiDAR SLAM.