1. Introduction
Cooperative navigation in air–ground swarms is a research hotspot and has received much attention. Several studies have shown that air–ground swarm cooperative navigation has not only made significant progress in technical realization but has also demonstrated great potential in practical applications. In environments that are challenging for global navigation satellite systems (GNSS), air–ground swarm cooperative navigation enhances the formation configuration and reduces the geometric dilution of precision (GDOP), thereby improving overall navigation accuracy [1,2,3]. Due to manufacturing cost and flight load limitations, only a small number of swarm members are equipped with high-precision inertial navigation systems; most members rely only on low-precision inertial navigation systems and sensors for navigation. In complex terrain environments, the positioning signals of partner members and satellites are often blocked or interfered with, causing the positioning errors of some members to diverge [4]. The overall positioning accuracy of a swarm can be improved through the exchange of information between the sensors and members of the swarm. Therefore, cooperation is highly important for air–ground swarm navigation.
To improve the positioning accuracy of to-be-assisted swarm members in signal-blocking environments, many researchers have investigated cooperative positioning algorithms. In 2022, ref. [5] proposed a cooperative localization method for unmanned underwater vehicles, establishing a master–slave cooperative navigation model based on the extended Kalman filter (EKF) with state consistency; this model significantly enhances the synchronization and positioning accuracy of cooperative navigation for unmanned underwater vehicles. In 2017, Alessio Fascista of the University of Salento, Italy, proposed an EKF colocalization method based on the antenna-measured angle of arrival (AOA), which fuses inertial navigation and AOA signals to effectively improve the colocalization accuracy of vehicles in urban environments [6]. The above colocalization methods are mainly based on Kalman filtering, which constructs system state equations and measurement equations by taking member states as the filtered state quantities and relative observations as the measurements. Such methods are simple in engineering application, and the algorithms run stably. In addition, by modelling the cooperative network as a graph and reasoning over the resulting probabilistic graphical model, swarm cooperative navigation can also be realized. In 2009, Wymeersch of MIT first proposed a distributed cooperative localization algorithm based on the sum-product algorithm for large-scale wireless network localization, laying a theoretical foundation for belief propagation cooperative localization [7]. To address cooperative navigation errors due to swarm clock errors, Meyer of the Vienna University of Technology proposed a nonparametric belief propagation cooperative localization algorithm that jointly estimates swarm positions and clock errors, which suppresses the effect of clock errors on cooperative localization [8,9].
Considering real-time iterative updating of swarm members’ beliefs, Fernández of Aalto University, Finland, proposed a cooperative localization method based on posterior linearized belief propagation, which uses sigma points for the linearization between two nodes, thus reducing the dimensionality and computational load [10]. Hongmei Chen proposed an improved swarm cooperative localization method for collaborative UAVs in networks with GPS/INS/UWB interference, which allows normal sigma-point belief propagation in the presence of measurement interference [11].
In complex terrain environments, the accuracy of air–ground swarm localization is degraded because navigation and cooperative signals (e.g., satellite and radio signals) are often reflected by nearby terrain, resulting in non-line-of-sight (NLOS) effects. One study [12] weighted satellite navigation signals by their carrier-to-noise ratio to reduce the weight of possible NLOS signals in the positioning solution, thereby suppressing satellite navigation NLOS errors. Barral et al. used machine learning techniques to analyze actual UWB measurements in different scenarios, effectively separating NLOS-propagated positioning signals in UWB systems [13]. To achieve high-precision satellite positioning in urban environments, suppress NLOS effects and improve navigation accuracy, Groves of University College London proposed comparing the relative angle between the user and the satellite with the terrain contour in 3D digital maps, i.e., analyzing the terrain cover angle to judge satellite visibility and separate the NLOS information [14]. To address the NLOS problem of smartphone navigation and localization in urban environments, Hoi-Fung Ng introduced GNSS civil L5-band signals, which have stronger multipath resistance, and used 3D terrain modelling to determine the NLOS signals [15]. Zhong Q analyzed the localization performance of different filtering algorithms combined with 3D-terrain-modelling NLOS suppression techniques; the experimental results showed that the filtering algorithm has a greater impact on mobile localization than on static localization and that the 3D-terrain-modelling NLOS suppression method improves localization accuracy more in dense environments [16]. Although several collaborative localization methods have been proposed, further improving the localization accuracy of swarm members in signal-blocking environments remains a challenge.
Therefore, in this paper, we explore and improve existing collaborative navigation algorithms to achieve high-precision swarm navigation in complex environments.
This paper proposes a novel NLOS recognition method based on air–ground visibility analysis to improve cooperative navigation and positioning in complex signal-shielding environments. The method enhances NLOS signal recognition and separation accuracy, thereby boosting overall navigation performance. Section 2 introduces a collaborative navigation scheme using relative visibility analysis. Sections 3 and 4 constitute the method: Section 3 details a visibility analysis method for NLOS signal separation based on LOS intersection conditions and 3D terrain modelling, and Section 4 presents a cooperative localization method with heterogeneous ranging and angular observations. Section 5 contains the data and results, validating the effectiveness of these methods through simulation experiments and demonstrating significant improvements in navigation and localization accuracy, particularly under challenging terrain conditions. Sections 6 and 7 contain the discussion and conclusion.
Unlike traditional approaches that consider GNSS satellite visibility only, i.e., analyzing GNSS visibility based on the masking angle, this paper investigates visibility in the context of swarm collaboration. The contribution of this paper is that we further consider the effect of visibility once collaboration among swarm members is introduced, establish a collaborative navigation graph model for the air–ground swarm by linearizing the ranging and angular observations among the members, and solve the collaborative localization problem through belief propagation. The proposed NLOS identification method based on visible-vector-masking visibility analysis reduces the influence of NLOS relative measurements through the combination of an octree terrain data structure and an LOS/3D terrain model intersection detection method, which helps improve swarm localization in urban environments.
2. Cooperative Navigation Scheme with Relative Visibility Analysis
Considering that the NLOS errors of swarm cooperative navigation increase significantly in complex terrain environments and that the traditional NLOS error suppression method, based purely on angles from 3D terrain models, mistakenly deletes normally propagating signals at short distances, a cooperative navigation NLOS signal identification scheme based on air–ground visibility analysis was designed. The air–ground swarm LOS condition is introduced to separate NLOS signals more accurately during co-navigation, and ranging and angular observations are exploited for co-navigation through the belief propagation colocalization method, as shown in Figure 1.
In the air–ground swarm, the member to be assisted (AM) first reads the octree 3D terrain in its neighborhood using the rough position output by its own inertial navigation system and calculates the range of angles and distances over which it is visible through the ray/octree intersection detection method. Then, each reference member (RM) obtains its own accurate position through its GNSS receiver and provides relative measurement information and its own reference position to the AM through the data link. After obtaining the position and relative measurement information of the RM, the AM solves for the relative distance and angle to the RM. Subsequently, this relative distance and angle are compared with the previously calculated visible range of the AM. If the visibility condition is satisfied, the relative positioning data of the RM are retained; otherwise, the RM localization signal is an NLOS signal and is separated. Finally, belief propagation co-navigation and localization of the AM are performed using the colocalization data of the RMs after NLOS separation, yielding colocalization results with suppressed NLOS errors.
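The decision flow above can be sketched as follows. This is a minimal illustration, assuming a precomputed visible-range table indexed by a quantized azimuth; the function and variable names are hypothetical, not from the original implementation:

```python
import math

def separate_nlos(am_position, rm_reports, visible_range):
    """Keep only line-of-sight (LOS) reference-member (RM) reports.

    am_position   -- rough AM position from inertial navigation, (x, y, z)
    rm_reports    -- list of dicts, each with the RM's reported 'position'
    visible_range -- dict mapping an integer azimuth (deg) to a tuple
                     (cover_angle_deg, max_visible_distance_m) precomputed
                     by ray/octree intersection detection
    """
    los_reports = []
    for report in rm_reports:
        dx = report["position"][0] - am_position[0]
        dy = report["position"][1] - am_position[1]
        dz = report["position"][2] - am_position[2]
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        azimuth = math.degrees(math.atan2(dy, dx)) % 360.0
        elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
        cover_angle, max_vis_dist = visible_range[round(azimuth) % 360]
        # LOS if the RM lies above the terrain cover angle, or below it
        # but closer to the AM than the terrain along that azimuth.
        if elevation > cover_angle or dist < max_vis_dist:
            los_reports.append(report)  # retained for belief propagation
        # otherwise: NLOS signal, separated from the solution
    return los_reports
```

The retained reports are then passed to the belief propagation colocalization step described in Section 4.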
3. Air–Ground NLOS Identification Method Aided by 3D Terrain Modelling
3.1. Air–Ground Member-Ray Terrain Intersection Detection Methods
To suppress the NLOS effect and improve positioning accuracy in GNSS, the relative azimuth and elevation angle between the user and the satellite can be compared with the terrain contour in a 3D digital map, i.e., the terrain cover angle is analyzed to judge satellite visibility and separate the NLOS information [15,16,17]. If the satellite is visible to the user, the signal is an LOS signal; otherwise, it is an NLOS signal contaminated by NLOS errors and should be separated. As shown in Figure 2, the user receives signals from both Satellite 1 and Satellite 2 along the azimuth of the user’s location; the elevation angle of Satellite 1 is larger than the terrain cover angle, so Satellite 1 is assumed to be visible and its signal is not affected by NLOS effects. On the other hand, the elevation angle of Satellite 2 is lower than the terrain cover angle, so Satellite 2 is assumed to be invisible and its signal should be excluded from the position calculation.
The traditional angle-based 3D terrain visibility prediction method is effective in mitigating NLOS errors in GNSS-only systems. By calculating the angle between the vehicle and the satellite signal or building, NLOS-affected signals can be identified and separated, improving the accuracy and reliability of positioning. In cooperative navigation systems, however, such as a GNSS-UAV cooperative navigation system, a nearby LOS signal could be mistakenly removed, as shown in Figure 3. When the anchor UAV is between the user and high terrain, the user and the UAV are visible to each other, but the UAV’s elevation angle is lower than the terrain cover angle, so the visibility prediction algorithm mistakenly judges the UAV invisible and excludes its signal. This mistake weakens the cooperative positioning geometry and accuracy. In contrast to the traditional angle-based method, which considers only the terrain silhouette, the visible vector masking method combined with the 3D terrain also accounts for distance: along the same angle, rays of different lengths intersect the terrain at different points, so once the distance is fixed, transmitters located in front of the terrain can be retained while those behind it are correctly identified as NLOS.
In complex terrain environments, NLOS signals propagate via terrain reflection, whereas normal LOS signals propagate along straight lines. NLOS signals can therefore be identified and separated by determining whether the line-of-sight vector between the cooperative navigation signal source and the navigation signal receiver intersects the octree 3D terrain model.
LOS intersection detection (ray-tracing) algorithms find the first object hit by a given ray as it passes through a scene consisting of polygons or other objects. Ray tracing also has important applications in wireless communications, where it is used to generate realistic channel data. Ray-tracing simulators can evaluate beamforming techniques and optimize the directionality and coverage of antenna arrays by simulating the propagation paths of signals in space [18]. By simulating the propagation paths of wireless signals in complex environments, ray tracing improves the accuracy of positioning techniques, especially in environments with significant multipath effects.
Suppressing NLOS errors by constructing 3D digital terrains and predicting the visibility of GNSS satellites or cooperative navigation information sources in complex terrain environments was the subject of many studies [19,20,21,22,23]. The octree structure is a tree data structure based on recursive partitioning, which is mainly used for searching and storing in 3D space [24,25,26,27]. The octree data structure and the process of dividing the child nodes are shown schematically in Figure 4.
First, to initialize the octree, the maximum depth was set to 16 in this paper, giving at most 8^16 leaf voxels. The coordinates of a voxel in the octree can be queried from 3D point cloud coordinates [20]. Within the octree terrain, a voxel generally has one of three states: occupied, free and unknown. Dividing the octree voxels into occupied and free cases is sufficient for the NLOS identification method for air–ground swarm cooperative navigation; this paper does not use the unknown state.
The definition of a light ray is given by the following equation [28]:

$$\mathbf{p}(t) = \mathbf{o} + t\,\mathbf{d}, \qquad 0 \le t \le t_{\max} \quad (1)$$

where $\mathbf{o}$ is the starting point of the ray, $\mathbf{d}$ is the unit vector representing the direction of the ray, $t$ is the length of the ray, and $\mathbf{p}(t)$ is the end point of the ray. As an example, the ray detection algorithm for all cases is described in the following for two-dimensional quadtree intersection detection; the algorithm is also applicable to all other cases. The intersection of a ray with a quadtree map is shown in Figure 5. To determine the order in which the child nodes should be traversed, it is necessary to determine which child node the ray hits first. Let $t_x$ and $t_y$ denote the distances from the starting point of the ray to its intersections with the vertical and horizontal mid-lines of the node boundary, and let $t$ be the length of the ray. A first comparison among $t$, $t_x$ and $t_y$ determines whether the ray intersects child node q1 or q2, and a second comparison determines whether it intersects q0 or q3, thereby establishing which child node of the node the ray intersects first; the judgment process is shown in Figure 6 and Figure 7:
In 3D octree ray intersection detection, a single node contains eight child nodes, and the corresponding intersection parameters on the z-axis must also be calculated. Intersection detection therefore requires more comparisons, but the detection principle is the same as in the 2D case [29].
Through the above method, if the ray intersects the 3D octree map, the intersection points of the light ray and the 3D octree map as well as the length of the light ray can be obtained, which lays the foundation for the cooperative navigation NLOS error suppression method based on air–ground visibility analysis in this article.
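As a simplified illustration of ray/terrain intersection detection, the sketch below marches along the ray of Eq. (1) through a set of occupied voxels. The real implementation uses the hierarchical octree traversal of Figures 6 and 7 rather than uniform stepping; the names and the uniform-grid stand-in here are illustrative assumptions:

```python
import math

def first_hit(origin, direction, t_max, occupied, voxel=1.0, step=0.25):
    """Return the parameter t of the first occupied voxel hit by the ray
    p(t) = origin + t * direction, or None if the ray is free up to t_max.

    occupied -- set of integer voxel indices (ix, iy, iz); a stand-in for
                the occupied leaves of the octree terrain model.
    """
    norm = math.sqrt(sum(c * c for c in direction))
    d = [c / norm for c in direction]  # unit direction vector
    t = 0.0
    while t <= t_max:
        p = [origin[k] + t * d[k] for k in range(3)]
        idx = tuple(int(math.floor(c / voxel)) for c in p)
        if idx in occupied:
            return t        # ray/terrain intersection found at length t
        t += step
    return None             # line of sight is clear within t_max
```

Returning the hit parameter t directly gives both the intersection point (via Eq. (1)) and the visible distance along that direction.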
3.2. Air–Ground NLOS Identification Based on Visible Vector Masking
After obtaining the approximate position of the user, the elevation angle and azimuth of each signal transmitter are calculated.
As shown in Figure 8, the Cartesian coordinates of the user and of the signal transmitter in the ECEF coordinate frame are known; therefore, the relative distance, azimuth and elevation angle of the signal transmitter with respect to the user can be calculated.
If all the 3D terrain data were taken into consideration, the massive computation would slow down the program. To improve the simulation speed, this article considers only terrain less than 300 m away from the user. For terrain more than 300 m away, the cover angle is very low; for example, a 50 m tall building at that distance covers the user by less than 10°.
The terrain cover range is calculated after selecting the octree terrain near the AM. Taking the position of the user as the origin, the surrounding azimuth angles are sampled. At each sampled azimuth $\varphi$, the terrain cover limit, consisting of the terrain cover angle $\alpha(\varphi)$ and the maximum visible distance $d_{\mathrm{vis}}(\varphi)$, is calculated. At the azimuth of the signal transmitter, with transmitter elevation angle $\theta$ and relative distance $d$, visibility is predicted according to the following equation:

$$\text{visible} = \begin{cases} \text{true}, & \theta > \alpha(\varphi) \\ \text{true}, & \theta \le \alpha(\varphi) \ \text{and} \ d < d_{\mathrm{vis}}(\varphi) \\ \text{false}, & \text{otherwise} \end{cases} \quad (2)$$

The transmitter elevation angle $\theta$ is first compared with the terrain cover angle $\alpha(\varphi)$. If $\theta > \alpha(\varphi)$, the user and the transmitter are visible to each other. If $\theta \le \alpha(\varphi)$, the visible distance $d_{\mathrm{vis}}(\varphi)$ is compared with the relative distance $d$: if $d < d_{\mathrm{vis}}(\varphi)$, the signal transmitter is visible; if $d \ge d_{\mathrm{vis}}(\varphi)$, the transmitter and the user are not visible to each other, the received signal is an NLOS signal, and it should be excluded.
By this method, we can exclude NLOS signals more precisely and obtain high positioning accuracy.
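The comparison above can be illustrated with the following sketch, assuming the cover angle and visible distance for the transmitter's azimuth have already been computed from the terrain; the function name is illustrative:

```python
import math

def transmitter_visible(user, transmitter, cover_angle_deg, visible_dist):
    """Visibility test: the transmitter is LOS if its elevation exceeds
    the terrain cover angle at its azimuth, or if it is closer to the
    user than the terrain along that azimuth."""
    dx = transmitter[0] - user[0]
    dy = transmitter[1] - user[1]
    dz = transmitter[2] - user[2]
    horizontal = math.hypot(dx, dy)
    distance = math.sqrt(horizontal ** 2 + dz ** 2)
    elevation = math.degrees(math.atan2(dz, horizontal))
    if elevation > cover_angle_deg:
        return True                  # above the terrain silhouette
    return distance < visible_dist   # below it, but in front of the terrain

# A 50 m tall building 300 m away covers the user by less than 10 degrees:
cover = math.degrees(math.atan2(50.0, 300.0))  # about 9.5 degrees
```

The distance test is what allows a nearby cooperative member below the terrain silhouette to be retained as LOS.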
4. Enhanced Collaborative Localization Based on Belief Propagation
4.1. Edge Probability Model for Air–Ground Swarm Cooperative Navigation
An applicable scenario of the proposed algorithm is shown in Figure 9. An air–ground swarm consists of members such as unmanned aerial vehicles and unmanned ground vehicles. Owing to electromagnetic interference and to the different sensors carried by the members, the relative measurements among members consist of heterogeneous ranging and ranging/angle information.
The air–ground swarm belief propagation cooperative navigation graph is first modeled by approximating the positions of the air–ground swarm members with normal distributions. According to the coordinates $\mathbf{x}_i$ of the $i$th member, the position covariance of the $i$th member is estimated to be $\boldsymbol{\Sigma}_i$. The prior probability distribution function of the position of the $i$th member is

$$p(\mathbf{x}_i) = \mathcal{N}(\mathbf{x}_i;\, \boldsymbol{\mu}_i,\, \boldsymbol{\Sigma}_i) \quad (3)$$

where $\boldsymbol{\mu}_i$ is the mean value of the position of the $i$th member and $\mathcal{N}(\cdot)$ denotes the normal distribution in probability statistics. In the belief propagation model, a variable node is used to represent the belief of the position of the $i$th member, as shown in Figure 9. Swarm members transmit their own position beliefs through hybrid observation factors and correct the positions of adjacent members.
Let $\mathbf{z}_{ij}$ denote the relative heterogeneous measurement between the $i$th and $j$th members, which can be either a ranging or a ranging/angle measurement. It can be regarded as a function of the positions of the $i$th and $j$th members plus Gaussian noise, i.e.,

$$\mathbf{z}_{ij} = h(\mathbf{x}_i, \mathbf{x}_j) + \mathbf{v}_{ij} \quad (4)$$

where $\mathbf{v}_{ij}$ is Gaussian-distributed noise with mean 0 and covariance matrix $\mathbf{R}_{ij}$. To estimate the mean and variance of the members' positions, the relative heterogeneous measurement is approximated as an affine function of $\mathbf{x}_i$ and $\mathbf{x}_j$ with Gaussian noise, as shown in the following equation:

$$\mathbf{z}_{ij} \approx \mathbf{A}_i \mathbf{x}_i + \mathbf{A}_j \mathbf{x}_j + \mathbf{b} + \mathbf{e} \quad (5)$$

where $\mathbf{A}_i$, $\mathbf{A}_j$ and $\mathbf{b}$ are linearization parameters and $\mathbf{e}$ is a variable subject to a Gaussian distribution whose mean is zero and covariance matrix is $\boldsymbol{\Omega}$. To establish a hybrid observation factor, it is necessary to linearize the relative observation information between swarm members. Taking the relative observation between the $i$th and $j$th members as an example, since $h(\cdot)$ usually contains nonlinear terms, the linear regression is obtained by the sigma-point method [30]. The sigma sampling points $\boldsymbol{\chi}^{(k)}$ and weights $w_k$, $k = 1, \dots, m$, are selected by the unscented transformation, and the $m$ sigma sampling points are propagated according to the type of relative observation data:
$$\mathbf{z}^{(k)} = h\!\left(\boldsymbol{\chi}^{(k)}\right) = \begin{cases} d^{(k)}, & \text{ranging} \\ \left[d^{(k)},\ \varphi^{(k)},\ \theta^{(k)}\right]^{\mathrm{T}}, & \text{ranging/angle} \end{cases} \quad (6)$$

where $\varphi^{(k)}$ and $\theta^{(k)}$ are the azimuth and elevation angle, respectively, calculated from the sampled position $\boldsymbol{\chi}^{(k)}$, and $d^{(k)}$ is the corresponding relative distance. After the $m$ sigma sampling points are propagated, the mean and covariances of the relative observation information are obtained:

$$\bar{\mathbf{z}} = \sum_{k=1}^{m} w_k \mathbf{z}^{(k)} \quad (7)$$

$$\boldsymbol{\Psi} = \sum_{k=1}^{m} w_k \left(\boldsymbol{\chi}^{(k)} - \bar{\boldsymbol{\chi}}\right)\left(\mathbf{z}^{(k)} - \bar{\mathbf{z}}\right)^{\mathrm{T}} \quad (8)$$

$$\boldsymbol{\Phi} = \sum_{k=1}^{m} w_k \left(\mathbf{z}^{(k)} - \bar{\mathbf{z}}\right)\left(\mathbf{z}^{(k)} - \bar{\mathbf{z}}\right)^{\mathrm{T}} \quad (9)$$

where $\bar{\boldsymbol{\chi}}$ is the prior mean of the stacked positions.
Thus, the linearization parameters of the relative observation information are calculated by:

$$\mathbf{A} = \boldsymbol{\Psi}^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} \quad (10)$$

$$\mathbf{b} = \bar{\mathbf{z}} - \mathbf{A}\bar{\boldsymbol{\chi}} \quad (11)$$

$$\boldsymbol{\Omega} = \boldsymbol{\Phi} - \mathbf{A}\boldsymbol{\Sigma}\mathbf{A}^{\mathrm{T}} \quad (12)$$

where $\boldsymbol{\Sigma}$ and $\bar{\boldsymbol{\chi}}$ are the prior covariance and mean of the stacked positions. Then, the relative observation information can be decomposed into:

$$\mathbf{z}_{ij} = \mathbf{A}\begin{bmatrix}\mathbf{x}_i \\ \mathbf{x}_j\end{bmatrix} + \mathbf{b} + \mathbf{e}, \qquad \mathbf{e} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Omega}) \quad (13)$$
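The linearization steps above can be sketched as follows, assuming a standard unscented transformation with 2n + 1 sigma points; the weight scheme and names are illustrative choices, not necessarily those of ref. [30]:

```python
import numpy as np

def sigma_point_linearize(h, mean, cov, kappa=0.0):
    """Statistically linearize z = h(x) about N(mean, cov):
    returns (A, b, Omega) such that z ~= A @ x + b + e, e ~ N(0, Omega)."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)
    # 2n + 1 sigma points and weights of the unscented transform
    pts = [mean] + [mean + S[:, k] for k in range(n)] \
                 + [mean - S[:, k] for k in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    Z = np.array([h(p) for p in pts])       # propagate sigma points
    z_mean = w @ Z                          # measurement mean
    dX = np.array(pts) - mean
    dZ = Z - z_mean
    Pxz = dX.T @ (w[:, None] * dZ)          # state/measurement cross-cov.
    Pzz = dZ.T @ (w[:, None] * dZ)          # measurement covariance
    A = Pxz.T @ np.linalg.inv(cov)          # regression slope
    b = z_mean - A @ mean                   # regression offset
    Omega = Pzz - A @ cov @ A.T             # residual (linearization) noise
    return A, b, Omega
```

For a ranging/angle observation, h would stack the relative distance, azimuth and elevation computed from the stacked positions of the two members; for an already-affine h the residual covariance Omega vanishes.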
After repeating the above steps until all relative observations are linearized, the hybrid observation factor μ is computed in the belief propagation model. If the ith member has not received any hybrid observation factor from members other than the jth, the hybrid observation factor from the ith to the jth member is established directly in the belief propagation model:
(14)
where the quantities involved are the linearization parameters of the relative observation information and the relative observation noise covariance. If the ith member has received hybrid observation factors from members other than the jth, the optimized position estimate of the ith member based on those hybrid observation factors is first calculated:
(15)
The above formula can be computed by Kalman filter updates. Let $\hat{\mathbf{x}}$ and $\mathbf{P}$ denote the prior position estimate and its covariance, and let $\mathbf{z}$ be the linearized relative observation with observation matrix $\mathbf{H}$ and noise covariance $\mathbf{R}$. The filtering update is then:

Innovation covariance: $\mathbf{S} = \mathbf{H}\mathbf{P}\mathbf{H}^{\mathrm{T}} + \mathbf{R}$;

Filter gain: $\mathbf{K} = \mathbf{P}\mathbf{H}^{\mathrm{T}}\mathbf{S}^{-1}$;

State estimation equation: $\hat{\mathbf{x}}^{+} = \hat{\mathbf{x}} + \mathbf{K}\left(\mathbf{z} - \mathbf{H}\hat{\mathbf{x}}\right)$;

Estimated mean square error equation: $\mathbf{P}^{+} = \left(\mathbf{I} - \mathbf{K}\mathbf{H}\right)\mathbf{P}$.
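A minimal sketch of this filtering update, with a generic observation matrix H and noise covariance R standing in for the linearized relative observation model:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """One Kalman filter measurement update, as used to fold a hybrid
    observation factor into a member's position belief."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # filter gain
    x_new = x + K @ (z - H @ x)            # state estimation equation
    P_new = (np.eye(len(x)) - K @ H) @ P   # mean-square-error update
    return x_new, P_new
```

Repeating this update once per received hybrid observation factor yields the optimized position estimate used in the next message-passing step.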
4.2. Belief Propagation for Heterogeneous Information
In this section, a belief-propagation algorithm is used to realize air–ground member position estimation and relative heterogeneous measurement information fusion to obtain more accurate positions of air–ground swarms.
Specifically, each vehicle first collects position and navigation-related data using its onboard sensors. For each sensor’s measurement data, the marginal probability distribution, i.e., the probability distribution of the position or navigation information obtained from that sensor’s measurement alone, is calculated. Belief probabilities are then computed by considering measurements from other vehicles; these indicate the updated probability of the local sensor measurements after other relevant information is taken into account. For the ith member, according to the hybrid observation factors it receives, its optimized position estimate and covariance are obtained by the following formula:
(16)
Letting the initial estimate be the prior position estimate and covariance of the ith member, the above formula can be evaluated by applying the Kalman filter update once for each received hybrid observation factor. Its observation formula is:
(17)
Thus, a more accurate position estimate and covariance of the ith member are obtained.
4.3. Interactive Cooperative Localization Error Correction
According to the positions of the swarm members optimized by belief propagation, the swarm cooperative observation information is corrected. If relative observation information is transmitted between the ith and jth members, the joint posterior probability distribution function of their positions is:
(18)
In the above formula, the mean value and covariance of the joint posterior probability estimate of the ith member can be obtained by performing a Kalman filter update on the optimized position estimate and covariance of the ith member.
First, initialize the filter with the optimized position estimate and covariance of the ith member. Then, the linearization parameters, the relative observation information, and the relative observation noise covariance are used to update the Kalman filter, and the observation and state estimation formulas are:
$$\mathbf{z}_{ij} = \mathbf{A}\begin{bmatrix}\mathbf{x}_i \\ \mathbf{x}_j\end{bmatrix} + \mathbf{b} + \mathbf{e} \quad (19)$$

Filter gain:

$$\mathbf{K} = \mathbf{P}\mathbf{A}^{\mathrm{T}}\left(\mathbf{A}\mathbf{P}\mathbf{A}^{\mathrm{T}} + \boldsymbol{\Omega}\right)^{-1} \quad (20)$$

State estimation equation:

$$\hat{\mathbf{x}}^{+} = \hat{\mathbf{x}} + \mathbf{K}\left(\mathbf{z}_{ij} - \mathbf{A}\hat{\mathbf{x}} - \mathbf{b}\right) \quad (21)$$

where $\mathbf{A}$, $\mathbf{b}$ and $\boldsymbol{\Omega}$ are the linearization parameters and residual noise covariance of the relative observation, and $\hat{\mathbf{x}}$ and $\mathbf{P}$ are the stacked position estimate of the ith and jth members and its covariance.
The optimized joint position estimation of the th member is used to perform linearization decomposition of the observation information and update the linearization parameters of the relative observation information. By means of the belief-propagation algorithm, the marginal probabilities of the local sensors and the conditional probabilities are combined to generate a comprehensive posterior probability distribution. This posterior probability distribution provides an optimal estimate of the position and navigation state, considering the weights and contributions of all available information. Belief propagation is therefore able to integrate data from multiple sources and reduce the effect of single sensor errors, thus improving the overall system positioning and navigation accuracy.
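Under the Gaussian approximation used throughout this section, the combination of the local marginal with the estimates contributed by neighbors can be illustrated as an information-form product of the individual Gaussian estimates. This is a simplified sketch; in the full algorithm the neighbor terms come from the hybrid observation factors:

```python
import numpy as np

def fuse_gaussians(estimates):
    """Information-form product of independent Gaussian position beliefs:
    combines the local prior with the estimates derived from neighbors.

    estimates -- list of (mean, covariance) pairs
    """
    Lam = sum(np.linalg.inv(P) for _, P in estimates)      # information matrix
    eta = sum(np.linalg.inv(P) @ m for m, P in estimates)  # information vector
    P_post = np.linalg.inv(Lam)         # posterior covariance
    return P_post @ eta, P_post         # posterior mean and covariance
```

Because each independent term adds information, the fused covariance is never larger than any individual one, which is the mechanism by which belief propagation reduces the effect of single-sensor errors.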
5. Validation Results and Analysis
5.1. Air–Ground NLOS Identification Based on Ray Intersection Detection
To validate the algorithm proposed in this paper, small octree terrains containing domed buildings and C-shaped buildings were built for simulation validation of air–ground swarm NLOS identification based on LOS intersection detection. Figure 10 shows static scene 1, which contains a domed building, a navigation satellite and two UAVs.
Figure 11 shows static scene 2, which contains the C-shaped building and two UAVs.
In fact, the visibility range in the algorithm is variable and can be changed as needed; to simplify the calculation, the search range of LOS intersection detection was set to 300 m, which is used to analyze the visibility and interaction range of the swarm. The visible distance of the unmanned vehicle was calculated from the octree terrain model data. The visible distances of the unmanned vehicle imposed by the 3D terrain in scenes 1 and 2 are shown in Figure 12 and Figure 13, respectively.
Pseudocolor radar plots are used in Figure 12 and Figure 13 to represent the visible distance of the unmanned vehicle in all directions. The blue color indicating the visible distance fades as the distance from the terrain to the unmanned vehicle increases in each direction. The positions of the navigation satellites and air–ground members relative to the unmanned vehicle are labeled in Figure 12 and Figure 13, respectively. If the traditional pure-angle visibility judgment method, which compares only the elevation angle with the cover angle, is used to separate NLOS signals, the LOS signals of vehicle 2 at position 1 in scenes 1 and 2 are separated incorrectly. In contrast, with the proposed air–ground visibility analysis method, which also compares the visible distance with the relative distance, the NLOS signals in scenes 1 and 2 are identified and separated while the LOS signals of vehicle 2 at position 1 in scenes 1 and 2 are retained. This experiment demonstrates the effectiveness of the proposed cooperative navigation NLOS error suppression method based on air–ground visibility analysis in complex environments: it separates the NLOS signals of air–ground swarms more accurately than the traditional angle-only method, avoids erroneous deletion, and helps air–ground swarms efficiently utilize positioning signals in complex terrain environments.
5.2. Performance Analysis of Air–Ground Swarms’ Cooperative Localization
To verify the proposed algorithm, simulation experiments were conducted on a small 3D city model, as shown in Figure 14. The octree map used in the simulation is open-source, and the ray-tracing algorithm was implemented by the authors according to the ray–terrain intersection detection method in Section 3.1, whose judgment procedure is shown in Figure 6 and Figure 7. The 3D terrain model is built by defining occupancy values in an octree structure and simulates a typical urban environment; its urban-canyon characteristics create a challenging GNSS environment for an unmanned swarm. Based on a broadcast ephemeris file, the GPS constellation was simulated for a total duration of 800 s; other constellations can also be added for multi-constellation applications. The air–ground swarm members are equipped with an inertial navigation system, a satellite navigation system, an ultrawideband ranging system and a visual angle measurement system, of which the inertial navigation system has a gyroscope constant drift of 0.1 degree/h, the satellite navigation system has a pseudorange mean-squared error of 5 m, the ultrawideband ranging system has a ranging mean-squared error of 1 m, and the visual angle measurement system has an angular mean-squared error of 0.1 degrees. Unmanned ground vehicle UGV1 moves at a speed of 0.3 m per second and is surrounded by two reference unmanned vehicles numbered 2 and 3 and three reference members numbered 4, 5 and 6, which perform belief propagation cooperative navigation-assisted localization of UGV1 and improve its localization accuracy when few GPS satellites are available or when it is covered by terrain. The initial positions and motion characteristics of the unmanned vehicles are shown in Figure 14. Figure 15, Figure 16 and Figure 17 show the elevation angle, azimuth and relative distance of each air–ground swarm member relative to unmanned vehicle 1, respectively.
During the simulation, the number of visible satellites changes as shown in Figure 18. The number of visible satellites for unmanned vehicle 1 is sometimes less than 4, so satellite navigation is limited. Figure 19 shows the number of visible comembers of unmanned vehicle 1 under the traditional pure-angle visibility judgment method and under the LOS-condition-based visibility judgment method used in this paper. As shown in the figure, the number of visible cooperative members under the traditional angle-only method is smaller than under the proposed method, which suggests that the traditional angle-only method may misidentify the visible signals of nearby cooperative members as nonvisible and separate them. To further verify and analyze the visibility of unmanned vehicle 1, radar maps of the surrounding 3D environment and visibility range of unmanned vehicle 1 at a simulation time of 450 s were generated, as shown in Figure 20.
Figure 18 and Figure 20 show that at 450 s of simulation, UGV1 has one visible comember, UGV3, which is judged correctly by the visibility analysis method of this paper; however, UGV3 lies within the terrain cover range, so it is mistakenly separated by the traditional pure-angle visibility analysis method. The visible distance, true distance, and visibility judgment over time for UGV2 and UGV3 relative to UGV1 during the simulation are shown in Figure 21 and Figure 22. The localization errors of unmanned vehicle 1 in the complex terrain environment under the different cooperative methods are shown in Figure 23.
First, we study the case of a small number of reference members, where only one reference member, unmanned vehicle 2, supports unmanned vehicle 1. Figure 23 shows the position error of UGV1 for the different methods. The root-mean-square localization error is 13.15 m before cooperation, 9.96 m for the method without NLOS identification, 13.41 m for the angle-based method, and 8.12 m for the range/angle-based method. Thus, the cooperative localization method without NLOS identification improves the localization accuracy by 24.2%, the traditional angle-based NLOS identification method by −2.0%, and the range/angle-based method by 38.3%. Before co-navigation, the localization error of unmanned vehicle 1 diverges at 400–480 s and 700–780 s of simulation, when fewer than four satellites are visible and satellite navigation is limited; this is because small robots are usually equipped with low-cost MEMS inertial sensors, whose errors diverge without reference information. If UGV2 performs cooperative navigation for UGV1 without NLOS identification, UGV1 suffers NLOS errors during 20–80, 150–280 and 370–500 s. When the traditional angle-only NLOS identification method is used to co-navigate unmanned vehicle 1, unmanned vehicle 1 is not affected by NLOS errors; however, because this method mistakenly separates the LOS signals of UGV2, the localization error of unmanned vehicle 1 still diverges during 500–530 and 700–800 s, so cooperative navigation brings no improvement. If the proposed NLOS identification method is used for the cooperative navigation of unmanned vehicle 1, the NLOS error of unmanned vehicle 1 is suppressed, the LOS signal of the visible member is preserved, and the localization accuracy of unmanned vehicle 1 during 500–530 and 700–800 s is higher.
Then, we study the case of five reference members supporting UGV1. Figure 24 shows how the root-mean-square localization error varies among the methods: 13.15 m before cooperation, 9.60 m for the method without NLOS identification, 3.69 m for the angle-based method, and 0.96 m for the range/angle-based method; these correspond to localization accuracy improvements of 27.0%, 71.9%, and 92.7%, respectively. Combining Figure 19 and Figure 23, before cooperative navigation the inertial navigation system localization error of UGV1 diverges at 400–480 s and 700–780 s of simulation, when fewer than four satellites are visible (satellite navigation limited), and this error is the largest among the four methods. If the air–ground swarm performs cooperative navigation for UGV1 without NLOS signal identification, UGV1 is subject to NLOS errors, and its position error fluctuates several times. When the traditional angle-only NLOS identification method is used, UGV1 is not affected by NLOS error; however, because the angle-only method mistakenly separates the navigation signals of the visible members, localization error still occurs at 400–500 s and 700–800 s of simulation. With the proposed NLOS identification method, the NLOS error of UGV1 is suppressed, the navigation signals of the visible members are preserved, and the localization accuracy of UGV1 is higher at 400–500 s and 700–800 s of simulation.
The visibility analysis algorithm proposed in this paper, based on both LOS direction and LOS distance, improves localization accuracy in complex terrain environments by suppressing NLOS errors under multidomain cooperative navigation conditions, while retaining LOS cooperative signals that are mistakenly separated by traditional methods based on the terrain cover angle alone.
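The core distinction between the traditional pure-angle test and the proposed range/angle test can be sketched as follows. This is a simplified illustration under our own assumptions, not the paper's implementation: we assume that, for a given azimuth, the terrain model yields a cover (mask) angle and the distance at which terrain intercepts the line of sight (the "visible distance" of Figures 21 and 22); the function and parameter names are hypothetical.

```python
def is_visible(rel_range_m, elevation_deg, cover_angle_deg, cover_range_m):
    """Hypothetical LOS test along one azimuth direction.

    A signal source is declared NLOS only if BOTH conditions hold:
      - its elevation is below the terrain cover (mask) angle, AND
      - it lies beyond the distance at which terrain intercepts the LOS.
    A pure-angle test uses only the first condition, so it mistakenly
    separates close-range LOS co-members that sit in front of the terrain.
    """
    below_mask = elevation_deg < cover_angle_deg
    beyond_obstacle = rel_range_m > cover_range_m
    return not (below_mask and beyond_obstacle)

# Close-range co-member below the mask angle but in front of the terrain:
print(is_visible(120.0, 3.0, 10.0, 500.0))   # LOS kept -> True
# Distant GNSS satellite below the mask angle, behind the terrain:
print(is_visible(2.0e7, 3.0, 10.0, 500.0))   # NLOS separated -> False
```

For GNSS satellites the range condition is always satisfied, so the range/angle test reduces to the traditional cover-angle test; the difference appears only for nearby co-members, which is exactly the UGV3 case at 450 s.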
6. Discussion
Multi-constellation receivers are now widely used, which increases the number of visible GNSS satellites and effectively improves satellite positioning performance. However, in an urban canyon environment, even when the number of satellites is sufficient to obtain a positioning result, the poor geometric configuration introduces multipath effects and interference, which still pose serious challenges in complex environments. Our experimental results show that although the results with one reference member (Figure 23) are not as good as those with five reference members, belief propagation cooperative positioning is still helpful. Since NLOS errors can severely degrade localization results, even making them worse than with no cooperation at all, it is necessary to identify and separate NLOS signals with the method proposed in this article. After applying the proposed NLOS identification method, the position error of UGV1 is significantly reduced compared with both no NLOS identification and no cooperation. By taking the relative range of the reference members into consideration, our method identifies and separates NLOS signals more accurately than the traditional NLOS identification method. Therefore, the proposed method can effectively improve cooperative navigation and positioning in complex terrain environments. However, our research is currently limited to simulation. Building on these findings, future studies will implement and test the proposed method in real-world environments to validate the simulation results and assess performance under actual operational conditions.
7. Conclusions
This study investigated the enhancement of collaborative localization for air–ground swarms using belief propagation aided by 3D terrain modelling. The proposed method effectively addresses the challenges posed by complex terrain environments, particularly satellite signal blocking and NLOS effects. By incorporating heterogeneous ranging and angular information into the belief-propagation algorithm, the method demonstrates superior localization accuracy for air–ground swarms in complex terrains. The simulation results indicate that the proposed method effectively suppresses NLOS localization errors. The localization error curves show that the proposed method can still maintain a high localization accuracy under the condition of limited satellite visibility.
Author Contributions: Conceptualization, R.W.; methodology, R.W. and W.Z.; software, W.Z.; validation, R.W. and W.Z.; investigation, R.W. and W.Z.; writing—original draft preparation, W.Z. and X.C.; writing—review and editing, R.W. and X.C.; visualization, W.Z.; supervision, R.W. and Z.X.; project administration, R.W.; funding acquisition, R.W. and Z.X. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Acknowledgments: The authors would like to thank the anonymous reviewers for their helpful comments and valuable remarks.
Conflicts of Interest: The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. Enhanced collaborative localization scheme based on air–ground visibility analysis.
Figure 20. Three-dimensional environment and radar map of unmanned vehicle 1 at 450 s.
References
1. Zhang, J.; Liu, R.; Yin, K.; Wang, Z.; Gui, M.; Chen, S. Intelligent Collaborative Localization Among Air-Ground Robots for Industrial Environment Perception. IEEE Trans. Ind. Electron.; 2019; 66, pp. 9673-9681. [DOI: https://dx.doi.org/10.1109/TIE.2018.2880727]
2. Zhang, M.; Li, S.; Li, B. An air-ground cooperative scheduling model considering traffic environment and helicopter performance. Comput. Ind. Eng.; 2021; 158, 107458. [DOI: https://dx.doi.org/10.1016/j.cie.2021.107458]
3. Shi, C.; Xiong, Z.; Chen, M.; Wang, R.; Xiong, J. Cooperative Navigation for Heterogeneous Air-Ground Vehicles Based on Interoperation Strategy. Remote Sens.; 2023; 15, 2006. [DOI: https://dx.doi.org/10.3390/rs15082006]
4. Wang, R.; Xiong, Z.; Liu, J. Collaborative Localization-Based Resilient Navigation Fusion. Resilient Fusion Navigation Techniques: Collaboration in Swarm; Springer Nature: Singapore, 2023; pp. 65-92.
5. Zhang, F.; Wu, X.; Ma, P. Consistent Extended Kalman Filter-Based Cooperative Localization of Multiple Autonomous Underwater Vehicles. Sensors; 2022; 22, 4563. [DOI: https://dx.doi.org/10.3390/s22124563] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35746345]
6. Fascista, A.; Ciccarese, G.; Coluccia, A.; Ricci, G. Angle of arrival-based cooperative positioning for smart vehicles. IEEE Trans. Intell. Transp. Syst.; 2017; 19, pp. 2880-2892. [DOI: https://dx.doi.org/10.1109/TITS.2017.2769488]
7. Wymeersch, H.; Lien, J.; Win, M.Z. Cooperative localization in wireless networks. Proc. IEEE; 2009; 97, pp. 427-450. [DOI: https://dx.doi.org/10.1109/JPROC.2008.2008853]
8. Meyer, F.; Etzlinger, B.; Hlawatsch, F.; Springer, A. A distributed particle-based belief propagation algorithm for cooperative simultaneous localization and synchronization. Proceedings of the 2013 Asilomar Conference on Signals, Systems and Computers; Pacific Grove, CA, USA, 3–6 November 2013; pp. 527-531.
9. Meyer, F.; Hlinka, O.; Hlawatsch, F. Sigma point belief propagation. IEEE Signal Process. Lett.; 2013; 21, pp. 145-149. [DOI: https://dx.doi.org/10.1109/LSP.2013.2290192]
10. García-Fernández, Á.F.; Svensson, L.; Särkkä, S. Cooperative localization using posterior linearization belief propagation. IEEE Trans. Veh. Technol.; 2017; 67, pp. 832-836. [DOI: https://dx.doi.org/10.1109/TVT.2017.2734683]
11. Chen, H.; Xian-Bo, W.; Liu, J.; Wang, J.; Ye, W. Collaborative Multiple UAVs Navigation With GPS/INS/UWB Jammers Using Sigma Point Belief Propagation. IEEE Access; 2020; 8, pp. 193695-193707. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3031605]
12. Yuan, Y.; Shen, F.; Li, X. GPS multipath and NLOS mitigation for relative positioning in urban environments. Aerosp. Sci. Technol.; 2020; 107, 106315. [DOI: https://dx.doi.org/10.1016/j.ast.2020.106315]
13. Barral, V.; Escudero, C.J.; García-Naya, J.A.; Maneiro-Catoira, R. NLOS identification and mitigation using low-cost UWB devices. Sensors; 2019; 19, 3464. [DOI: https://dx.doi.org/10.3390/s19163464]
14. Groves, P.D. It’s time for 3D mapping-aided GNSS. Inside GNSS Mag.; 2016; pp. 50-56.
15. Ng, H.F.; Zhang, G.; Luo, Y.; Hsu, L.T. Urban positioning: 3D mapping-aided GNSS using dual-frequency pseudorange measurements from smartphones. Navigation; 2021; 68, pp. 727-749. [DOI: https://dx.doi.org/10.1002/navi.448]
16. Zhong, Q.; Groves, P.D. Multi-epoch 3D-mapping-aided positioning using bayesian filtering techniques. NAVIGATION J. Inst. Navig.; 2022; 69, navi.515. [DOI: https://dx.doi.org/10.33012/navi.515]
17. Saona-Vázquez, C.; Navazo, I.; Brunet, P. The visibility octree: A data structure for 3D navigation. Comput. Graph.; 1999; 23, pp. 635-643. [DOI: https://dx.doi.org/10.1016/S0097-8493(99)00087-4]
18. Chen, Y.; Li, Y.; Han, C.; Yu, Z.; Wang, G. Channel Measurement and Ray-Tracing-Statistical Hybrid Modeling for Low-Terahertz Indoor Communications. IEEE Trans. Wirel. Commun.; 2021; 20, pp. 8163-8176. [DOI: https://dx.doi.org/10.1109/TWC.2021.3090781]
19. Adjrad, M.; Groves, P.D. Intelligent Urban Positioning: Integration of Shadow Matching with 3D-Mapping-Aided GNSS Ranging. J. Navig.; 2018; 71, pp. 1-20. [DOI: https://dx.doi.org/10.1017/S0373463317000509]
20. Kubelka, V.; Dandurand, P.; Babin, P.; Giguère, P.; Pomerleau, F. Radio propagation models for differential GNSS based on dense point clouds. J. Field Robot.; 2020; 37, pp. 1347-1362. [DOI: https://dx.doi.org/10.1002/rob.21988]
21. Niijima, S.; Sasaki, Y.; Mizoguchi, H. Real-time autonomous navigation of an electric wheelchair in large-scale urban area with 3D map. Adv. Robot.; 2019; 33, pp. 1006-1018. [DOI: https://dx.doi.org/10.1080/01691864.2019.1642240]
22. Choi, H.; Oh, J.; Chung, J.; Alexandropoulos, G.C.; Choi, J. WiThRay: A Versatile Ray-Tracing Simulator for Smart Wireless Environments. IEEE Access; 2023; 11, pp. 56822-56845. [DOI: https://dx.doi.org/10.1109/ACCESS.2023.3283610]
23. Girindran, R.; Boyd, D.S.; Rosser, J.; Vijayan, D.; Long, G.; Robinson, D. On the reliable generation of 3D city models from open data. Urban Sci.; 2020; 4, 47. [DOI: https://dx.doi.org/10.3390/urbansci4040047]
24. Haala, N.; Peter, M.; Kremer, J.; Hunter, G. Mobile LiDAR mapping for 3D point cloud collection in urban areas—A performance test. Int. Arch. Photogram. Remote Sens. Spat. Inf. Sci.; 2008; 37, pp. 1119-1127.
25. Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Auton. Robot.; 2013; 34, pp. 189-206. [DOI: https://dx.doi.org/10.1007/s10514-012-9321-0]
26. Cao, Y.; Yang, S.; Feng, Z.; Wang, L.; Hanzo, L. Distributed Spatio-Temporal Information Based Cooperative 3D Positioning in GNSS-Denied Environments. IEEE Trans. Veh. Technol.; 2023; 72, pp. 1285-1290. [DOI: https://dx.doi.org/10.1109/TVT.2022.3202015]
27. Pedro, V.; Eloisa, D.; Marco, C. Morphological Operations on Unorganized Point Clouds Using Octree Graphs. ACM J. Comput. Cult. Herit. (JOCCH); 2023; 16, 12.
28. Revelles, J.; Urena, C.; Lastra, M. An Efficient Parametric Algorithm for Octree Traversal. J. WSCG; 2000; 8, pp. 212-219.
29. Huang, M.; Zhou, Y.; Wang, Y.; Liu, Z. An efficient adaptive space partitioning algorithm for electromagnetic scattering calculation of complex 3D models. J. Syst. Eng. Electron.; 2021; 32, pp. 1071-1082.
30. García-Fernández, Á.F.; Svensson, L.; Morelande, M.R.; Särkkä, S. Posterior linearization filter: Principles and implementation using sigma points. IEEE Trans. Signal Process.; 2015; 63, pp. 5561-5573. [DOI: https://dx.doi.org/10.1109/TSP.2015.2454485]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Navigation system performance degrades significantly in complex environments. It is therefore important to analyze satellite visibility through 3D terrain modelling and to separate satellite signals propagated by NLOS paths so as to suppress NLOS errors. However, the traditional 3D terrain modelling visibility analysis method, based on the pure terrain cover angle, is suitable only for determining the visibility of GNSS satellites and may incorrectly separate LOS-propagated measurement signals from members with short relative ranges and low elevation angles under air–ground swarm conditions. To this end, this paper proposes a belief-propagation cooperative navigation method based on air–ground visibility analysis, which avoids mistakenly separating close-range LOS cooperative navigation signals by simultaneously considering the distances, elevation angles, and azimuths of the signal sources relative to the air–ground swarm members. Simulations show that the proposed cooperative navigation NLOS identification method based on air–ground visibility analysis separates NLOS signals under cooperative conditions more accurately than the traditional pure-angle 3D terrain modelling visibility analysis method does, and the localization error of the members to be assisted is significantly reduced.
1 Navigation Research Center, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
2 Navigation Research Center, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; Shanghai Aerospace Control Technology Research Institute, Shanghai 201109, China