1. Introduction
As defined by the Society of Automotive Engineers (SAE) International in 2016, autonomous driving technology is subdivided into six levels, depending on the required conditions and functions [1]. Among them, Level 2 is a driving assistance stage in which longitudinal and lateral control are performed simultaneously; the vehicle is driven in a partially automated state, and the human driver must still concentrate on driving. At Level 3, the vehicle is likewise operated in a partially automated state, but driver intervention is much reduced. Level 4 and higher perform autonomous driving without driver intervention, subject to given constraints such as weather conditions. To move beyond Level 3, a more advanced autonomous driving system must be implemented in which driver intervention is progressively reduced.
The autonomous driving system works in the order of recognition, decision, and control; its smooth operation requires a processing system, an electronic control unit, and various sensors [2,3]. In an autonomous driving system, recognition sensors consist of cameras, lidar, radar, and others to provide the relative position of the surrounding objects. Ego-vehicle positioning sensors can include a global navigation satellite system (GNSS) receiver and an inertial measurement unit (IMU) to provide location and status information. Among them, the GNSS sensor receives satellite signals at a low power on the ground, originally transmitted by satellites orbiting at an altitude of >20,000 km. It processes signals to provide the user location, velocity, direction, and time information on a global scale [4]. However, GNSS-equipped vehicles may be affected by multipath or degraded signals and signal blockage due to the signal-receiving environment (i.e., GNSS-denied environment). Moreover, the failure or system malfunction of a GNSS sensor could lead to an unpredictable accident. Therefore, it is necessary to estimate the current position in any situation for safe driving.
Nazaruddin et al. presented an ego-vehicle localization method based on the IMU sensor for GNSS disconnection situations, which may be caused by high dependence on the external environment or a low sampling rate. This technique predicts the next coordinates from the previous GNSS and current IMU sensor data. The localization uses an error-state Kalman filter (KF) with a residual long short-term memory model [5]. A similar approach to the GNSS outage problem using two complementary sensors, the IMU and an odometer, was proposed in [6], where the odometer and GNSS measurements were exploited to correct IMU errors while GNSS was available. The odometer was then used to correct IMU errors during GNSS outages to improve the positioning accuracy while ensuring the continuity of the navigation solution.
Furthermore, various localization methods, such as visual or laser simultaneous localization and mapping (SLAM), have been proposed that use perception sensors (cameras, radar, and lidar) cooperatively to achieve better navigational information [7]. Satellite or geographic information system images are used together with visual odometry, albeit with greater storage and computational requirements [8,9]. These methods can support two functions simultaneously: visually surveilling the required geographical areas and nearby obstacles, and estimating the navigational state (e.g., position, velocity, and attitude), especially in GNSS-denied environments.
However, these methods may have the drawback that the predetermined position information of surrounding objects is required for localization. To overcome this limitation, vehicle-to-everything (V2X) communication [10], where status information including the position of the surrounding objects can be exchanged, is ideally suited for sensor fusion. A radar-based ego-vehicle localization method using vehicle-to-vehicle (V2V) communication was proposed in [11]. In GNSS-denied situations, this method estimates the absolute coordinates of the ego-vehicle by combining the relative coordinates of the object vehicles obtained using radar with the absolute coordinates of the object vehicles received through V2V communication. Ma et al. presented a vehicle-to-infrastructure (V2I)-based vehicle localization algorithm, where a low-cost IMU-assisted single roadside unit and a GNSS receiver-based localization algorithm based on the least-squares method were used to reduce the deployment costs [12]. Perea-Strom et al. presented a fusion method of wheel odometry, IMU, GNSS, and lidar using the adaptive Monte Carlo localization algorithm, where a particle weighting model to integrate GNSS measurements was adopted to perform ego-vehicle localization in a 2D map [13].
One way to improve the accuracy of localization algorithms that rely on surrounding objects is to increase the confidence in the estimated states of those objects. To do this, a tracking algorithm must be applied to the surrounding objects. KF-based tracking using 2D or 3D object information obtained through the camera and lidar is widely used for continuous object tracking [14]. In addition, model-free tracking based on a support vector machine classifier trained on the histogram of oriented gradients feature has been studied [15]. Research on object-tracking algorithms incorporating deep learning is still underway. For example, a convolutional neural network was used as a feature extractor, and more accurate tracking was performed through the adaptive hedge method [16]. Furthermore, a two-stage Siamese re-detection architecture and a tracklet-based dynamic programming algorithm were combined for the re-detection and tracking of objects even after long occlusion [17].
This paper presents a localization method for the ego-vehicle in GNSS-denied environments, using lidar and V2X communication cooperatively. Sensor fusion and localization for surrounding vehicles and objects, and the ego-vehicle using the lidar sensor and V2X communication are presented. Multiobject tracking (MOT) using an extended KF (EKF) and a localization method using a particle filter (PF) are described. An autonomous driving simulator modeled by each sensor is used to verify this, and comparative verification is performed based on the truth and predicted trajectory obtained from the simulator.
2. Sensor Fusion and Localization
2.1. Overview
The cooperative localization presented in this paper is a two-step approach. In the first step, the method estimates the coordinates of the surrounding vehicles and objects based on the lidar sensor and V2X communication data. In the second step, it estimates the ego-vehicle coordinates based on the results of the first step when the GNSS signal is unavailable. Figure 1 shows the block diagram of the cooperative localization process. It consists of a data processing module, a sensor fusion module, and a localization module. The first two localize the surrounding vehicles and objects, and the last localizes the ego-vehicle. For simplicity, only the surrounding vehicles are considered in this paper, but the same method can be applied to other surrounding objects.
In the data processing module, the lidar sensor recognizes the surrounding vehicles and estimates their positions relative to the ego-vehicle. Simultaneously, the status information of the surrounding vehicles is independently obtained through V2X (specifically, V2V) communication. The surrounding vehicle information from the data processing module is then merged in the sensor fusion module to perform highly reliable MOT in real time. In the localization module, the ego-vehicle coordinates are estimated through the PF using the MOT results of the sensor fusion module.
2.2. Data Processing
The data processing module of cooperative localization processes the lidar sensor output and V2X communication messages independently. It detects the surrounding vehicles based on the point cloud data (PCD) of the lidar sensor and receives a basic safety message (BSM) of V2X communication to obtain the surrounding vehicle data [18]. To recognize the vehicle from the PCD, a series of processes such as ground filtering, clustering, and classification should be performed [19]. Also, the L-shape fitting method is applied to vehicle detection and tracking [20].
Figure 2 shows the entire process of finding the position and heading of a surrounding vehicle in the PCD. First, a gradient-based ground removal algorithm is applied for ground filtering [21]: the point gradient between channels is small on the ground but large on objects, so this difference is used to separate the ground. Vehicle candidate clustering is then performed on the filtered object points by applying a voxel grid. Adjacent voxels are clustered, and a group of vehicle cluster candidates is selected based on the size of the resulting clusters. Finally, L-shape fitting is performed to extract the heading and position of each vehicle candidate cluster. The L-shape fitting is the problem of defining two perpendicular lines for each segmented cluster of points, based on the L-shaped rectangle model assumption, for vehicle detection from lidar data. The algorithm finds the rectangle that best fits all points contained in the vehicle cluster by searching over all possible orientations of the rectangle, which yields L shapes and 2D bounding boxes for the vehicle cluster.
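As a concrete illustration, the orientation search described above can be sketched as a minimum-area rectangle fit (a simplified sketch; the function name, the 1° angle step, and the pure area criterion are our assumptions, not the paper's implementation):

```python
import numpy as np

def fit_rectangle(points, angle_step_deg=1.0):
    """Fit a minimum-area rectangle to a vehicle cluster by searching over
    candidate orientations (a rectangle repeats every 90 degrees).

    points: (N, 2) array of x-y lidar points in the sensor frame.
    Returns (center, extents, heading_rad).
    """
    best = None
    for deg in np.arange(0.0, 90.0, angle_step_deg):
        theta = np.radians(deg)
        # Rotate the cluster into the candidate rectangle frame.
        rot = np.array([[np.cos(theta), np.sin(theta)],
                        [-np.sin(theta), np.cos(theta)]])
        local = points @ rot.T
        lo, hi = local.min(axis=0), local.max(axis=0)
        area = np.prod(hi - lo)
        if best is None or area < best[0]:
            center = rot.T @ ((lo + hi) / 2.0)  # back to the sensor frame
            best = (area, center, hi - lo, theta)
    _, center, extents, heading = best
    return center, extents, heading
```

In practice, L-shape fitting often scores candidate orientations by point-to-edge closeness rather than pure area, and the two fitted perpendicular edges form the L shape.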
Through V2X communication, the ego-vehicle receives the surrounding vehicle states, including the coordinates, heading, velocity, and vehicle size, via BSM messages. For simplicity, only the position on the 2D plane is considered in this paper. The coordinates of a surrounding vehicle in the global frame (i.e., earth-centered earth-fixed or latitude-longitude-height) are converted to local 2D coordinates (i.e., north-east-down) whose origin is at the ego-vehicle, so that they can be fused with the lidar sensor output. The lidar sensor is assumed to be mounted at the center of the ego-vehicle so that it shares the same coordinate system as the ego-vehicle's body frame. The heading angle of the ego-vehicle is used to obtain the local coordinates of the surrounding vehicle as follows:
$$\begin{bmatrix} N \\ E \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_b \\ y_b \end{bmatrix} \quad (1)$$

where $(x_b, y_b)$ and $(N, E)$ represent the surrounding vehicle coordinates in the ego-vehicle's body frame and in the local north-east-down frame, respectively, and $\theta$ is the heading angle of the ego-vehicle. Figure 3 shows the relationship between the coordinate systems. Figure 4 shows the output of each sensor after matching the coordinate systems, so that the surrounding vehicle information is expressed in the same coordinate system.

2.3. Sensor Fusion for Surrounding Vehicles
Sensor fusion is a method of processing surrounding environment information by fusing the data output from multiple sensors [22]. Through this, reliable data with a reduced false detection probability can be extracted. Recently, various research efforts have been underway to fuse heterogeneous sensors in various ways to improve the performance of autonomous driving systems. Sensor fusion can be classified as early or late fusion [23]. Early fusion is a preprocessing method that integrates unimodal features based on the raw data output from each sensor. However, it has the drawback that the fusion results become ambiguous due to sensitive responses to sensor failure or noise. Late fusion is a post-processing method based on preprocessed sensor data, which has the advantage that a sensor failure does not significantly affect the remaining processes. In this paper, a late fusion method based on the simple online and real-time tracking (SORT) algorithm, using post-processed data from the lidar sensor and V2X communication (mainly for MOT), is used to minimize the effect of sensor defects [24]. As depicted in Figure 5, SORT combines the KF with the Hungarian algorithm.
In this paper, an EKF and intersection over union (IoU)-based SORT are employed to predict the nonlinear movement direction, as displayed in Figure 6 [25].
Prediction in the sensor fusion module is performed based on the information of the surrounding vehicle from the data processing module to predict the surrounding vehicle’s next position and heading, as illustrated in Figure 6. After that, the IoU matching is conducted to match the prediction and the previous information. Finally, the next moving paths are updated using the matched information of the surrounding vehicles.
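The predict-match-update cycle described above can be sketched with a simplified EKF (a constant-velocity, constant-heading motion model with an assumed 0.1 s period and a fused position-plus-heading measurement; all names and parameters are illustrative assumptions, not the paper's exact matrices):

```python
import numpy as np

DT = 0.1  # tracker period in seconds (an assumed value)

def predict(x, P, Q):
    """EKF prediction for one tracked vehicle with state [px, py, v, theta],
    using a constant-velocity, constant-heading motion model."""
    px, py, v, th = x
    x_pred = np.array([px + v * np.cos(th) * DT,
                       py + v * np.sin(th) * DT,
                       v,
                       th])
    # Jacobian of the motion model, used to propagate the covariance.
    F = np.array([[1.0, 0.0, np.cos(th) * DT, -v * np.sin(th) * DT],
                  [0.0, 1.0, np.sin(th) * DT,  v * np.cos(th) * DT],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    return x_pred, F @ P @ F.T + Q

def update(x_pred, P_pred, z, R):
    """EKF update with a fused measurement z = [px, py, theta] from the
    weighted mean of the lidar and V2X modules."""
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```

Each tracked vehicle carries its own `(x, P)` pair; the IoU matching step decides which fused measurement `z` feeds which track's `update` call.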
First, the state vector of the EKF is the result of tracking the $i$-th surrounding vehicle:

$$\mathbf{x}_k^i = \begin{bmatrix} x_k^i & y_k^i & v_k^i & \theta_k^i & \mathrm{ID}^i \end{bmatrix}^T \quad (2)$$

where $x_k^i$ and $y_k^i$ represent the surrounding vehicle coordinates, $v_k^i$ is the velocity, $\theta_k^i$ is the heading, and $\mathrm{ID}^i$ denotes the vehicle identifier. Each value is obtained as a weighted mean of the values calculated by the lidar and V2X modules. Taking the lidar module, with its high distance accuracy, as the reference, the V2X module is given a high weight if the position it reports is similar to the lidar result and a low weight if the difference is large. The matrices used for the EKF can be expressed as follows:

$$\mathbf{x}_{k+1}^i = \mathbf{F}\,\mathbf{x}_k^i, \qquad \mathbf{z}_k^i = \mathbf{H}\,\mathbf{x}_k^i \quad (3)$$
where $\mathbf{F}$ denotes the state transition matrix, $\mathbf{H}$ represents the measurement transition matrix, $\mathbf{z}_k^i$ is the measurement vector, and $k$ indicates the time index. Next, the prediction is performed using Equation (4):

$$\hat{\mathbf{x}}_{k+1}^{i-} = \mathbf{F}\,\hat{\mathbf{x}}_k^i, \qquad \mathbf{P}_{k+1}^{i-} = \mathbf{F}\,\mathbf{P}_k^i\,\mathbf{F}^T + \mathbf{Q} \quad (4)$$
where $\hat{\mathbf{x}}_{k+1}^{i-}$ and $\mathbf{P}_{k+1}^{i-}$ denote the predicted state vector and its error covariance for the $i$-th surrounding vehicle, and $\mathbf{Q}$ is defined as the system noise covariance matrix. Based on the predicted coordinates of the surrounding vehicles, the IoU is calculated to match the previously tracked surrounding vehicle coordinates [26]. As presented in Figure 4, the coordinates, heading, and vehicle size of the surrounding vehicles at times $k$ and $k+1$ are used to create a 2D bounding box. Then, matching and unmatching are decided by a designated IoU threshold value. As in Equation (5), the area of the intersection of the predicted box $B_{\mathrm{pred}}$ and the previously tracked box $B_{\mathrm{track}}$ is divided by the area of their union, and the resulting IoU score is compared with the threshold:

$$\mathrm{IoU} = \frac{\left|B_{\mathrm{pred}} \cap B_{\mathrm{track}}\right|}{\left|B_{\mathrm{pred}} \cup B_{\mathrm{track}}\right|} \quad (5)$$
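In a simplified form, the IoU score can be computed for axis-aligned boxes as follows (an assumption for brevity: the paper's boxes are oriented by the vehicle heading, which would require a polygon-intersection routine instead):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

For example, `iou((0, 0, 2, 2), (1, 1, 3, 3))` overlaps in a unit square within a union of seven, giving 1/7.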
If the score is higher than the threshold, the two boxes are regarded as the same vehicle, and the ID of the previously tracked vehicle is assigned to the predicted vehicle coordinates. Otherwise, the prediction is treated as an unclear vehicle and is deleted after a certain time. In addition, if the score is 0, it is treated as a new vehicle, and a new track is created. When the track list is updated, the final update of the EKF proceeds with Equation (6):
$$\mathbf{K}_k = \mathbf{P}_{k+1}^{i-}\mathbf{H}^T\!\left(\mathbf{H}\,\mathbf{P}_{k+1}^{i-}\mathbf{H}^T + \mathbf{R}\right)^{-1}\!, \quad \hat{\mathbf{x}}_{k+1}^i = \hat{\mathbf{x}}_{k+1}^{i-} + \mathbf{K}_k\!\left(\mathbf{z}_{k+1}^i - \mathbf{H}\,\hat{\mathbf{x}}_{k+1}^{i-}\right)\!, \quad \mathbf{P}_{k+1}^i = \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}\right)\mathbf{P}_{k+1}^{i-} \quad (6)$$

where $\mathbf{K}_k$ is the Kalman gain, $\mathbf{R}$ denotes the measurement noise covariance matrix, and $\hat{\mathbf{x}}_{k+1}^i$ and $\mathbf{P}_{k+1}^i$ represent the updated state vector and error covariance of the $i$-th surrounding vehicle.

2.4. Localization for the Ego-vehicle
Localization of the ego-vehicle is a core technology for autonomous driving. It estimates the coordinates and direction of the ego-vehicle in space using the vehicle state information when the GNSS signal is strong, or using the surrounding environment information otherwise [27]. As localization techniques that use the surrounding environment information, particularly for GNSS-denied environments, various radio-based techniques using the time difference of arrival, angle of arrival, or received signal strength indication, additionally corrected by a KF or PF, have been proposed [28,29,30]. In addition, a typical localization technique based on the coordinates of surrounding objects is SLAM. This method uses algorithms suited to each sensor to estimate the coordinates of the ego-vehicle by comparing the characteristics of the current environment sensed by various sensors with a previously generated map [31,32,33,34].
In the localization step of the proposed method, the ego-vehicle must be localized based on the surrounding vehicle coordinates received from the sensor fusion. Therefore, we used a PF that precisely localizes the ego-vehicle using landmarks and its own state information. Figure 7 depicts the localization of the ego-vehicle based on the fused surrounding vehicle list. The heading, velocity, and yaw rate of the ego-vehicle, which can be extracted from the in-vehicle sensors (e.g., inertial sensors), and the surrounding vehicle coordinates are processed in the PF to update the ego-vehicle coordinates.
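A minimal sketch of such a landmark-based PF is given below. The state is [x, y, heading], and the tracked surrounding vehicles act as landmarks observed in the ego body frame; the noise levels, filter period, and function names are our assumptions rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
DT = 0.1  # filter period in seconds (assumed)

def predict_particles(particles, v, yaw_rate, sigma=(0.5, 0.05)):
    """Propagate particles [x, y, theta] with the ego velocity and yaw rate,
    adding Gaussian noise to the control input."""
    n = len(particles)
    v_n = v + rng.normal(0.0, sigma[0], n)
    w_n = yaw_rate + rng.normal(0.0, sigma[1], n)
    particles[:, 2] += w_n * DT
    particles[:, 0] += v_n * DT * np.cos(particles[:, 2])
    particles[:, 1] += v_n * DT * np.sin(particles[:, 2])
    return particles

def update_weights(particles, landmarks, observations, sigma=1.0):
    """Weight each particle by how well the fused surrounding-vehicle
    coordinates (landmarks, absolute frame) match their observed body-frame
    positions, using a Gaussian likelihood per coordinate."""
    weights = np.ones(len(particles))
    for lm, obs in zip(landmarks, observations):
        dx, dy = lm[0] - particles[:, 0], lm[1] - particles[:, 1]
        c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
        ex, ey = c * dx + s * dy, -s * dx + c * dy  # expected body-frame position
        weights *= np.exp(-((ex - obs[0]) ** 2 + (ey - obs[1]) ** 2)
                          / (2 * sigma ** 2))
    weights += 1e-300  # guard against an all-zero weight vector
    return weights / weights.sum()

def estimate(particles, weights):
    """Weighted mean of the particles: the ego-vehicle pose estimate."""
    return (particles * weights[:, None]).sum(axis=0)

def systematic_resample(particles, weights):
    """Systematic resampling: draw new particles in proportion to the weights."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[np.minimum(idx, n - 1)]
```

The predict, weight, estimate, and resample steps correspond to the roles of Equations (9), (10), (11), and the systematic resampling stage, respectively.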
State vector $\mathbf{x}_t$, state transition matrix $\mathbf{F}$, control matrix $\mathbf{B}$, and control vector $\mathbf{u}_t$ are used to localize the ego-vehicle in the PF as follows:

$$\mathbf{x}_t = \mathbf{F}\,\mathbf{x}_{t-1} + \mathbf{B}\,\mathbf{u}_t, \qquad \mathbf{x}_t = \begin{bmatrix} x_t & y_t & \theta_t \end{bmatrix}^T, \qquad \mathbf{u}_t = \begin{bmatrix} v_t & \omega_t \end{bmatrix}^T \quad (7)$$

where $x_t$, $y_t$, $\theta_t$, $v_t$, and $\omega_t$ denote the ego-vehicle coordinates, heading, velocity, and yaw rate, respectively. When the GNSS sensor of the ego-vehicle works well, $x_t$ and $y_t$ are always updated, but when the GNSS sensor is disconnected or in a denied environment, the state vector is composed only of the previous state $\mathbf{x}_{t-1}$ and the control input $\mathbf{u}_t$. The particles of the initial PF include the state vector and the weight, as presented in Equation (8):

$$\mathcal{P}_t = \left\{\left(\mathbf{x}_t^{(j)},\, w^{(j)}\right)\right\}_{j=1}^{N_p}, \qquad w^{(j)} = \frac{1}{N_p} \quad (8)$$
where $N_p$ represents the number of particles, and $\mathcal{P}_t$ indicates the set of particles and weights. In addition, the PF method in this paper must consider the effects of noise. Therefore, the prediction of $\mathbf{x}_t^{(j)}$ is calculated by including noise in $\mathbf{u}_t$, as in Equation (9):

$$\hat{\mathbf{x}}_t^{(j)} = \mathbf{F}\,\mathbf{x}_{t-1}^{(j)} + \mathbf{B}\left(\mathbf{u}_t + \mathbf{n}^{(j)}\right), \qquad \mathbf{n}^{(j)} \sim \mathcal{N}\!\left(\mathbf{0},\, \sigma_u^2\mathbf{I}\right) \quad (9)$$
where $\mathcal{N}$ is a function that generates random Gaussian noise, and $\hat{\mathbf{x}}_t^{(j)}$ denotes the predicted $j$-th particle state vector. Next, the weights of the PF must be updated to obtain the optimal ego-vehicle coordinates. Equation (10) calculates the weights based on the ego and surrounding vehicle coordinates: after calculating the difference between the coordinates predicted from each particle state vector and the coordinates of the surrounding vehicles, the weights are updated using the normal distribution function.

$$w^{(j)} = \prod_{m=1}^{N_v} f_x\!\left(\Delta x_m^{(j)}\right) f_y\!\left(\Delta y_m^{(j)}\right) \quad (10)$$
where $f_x$ and $f_y$ represent the normal distribution function of each coordinate, $\Delta x_m^{(j)}$ and $\Delta y_m^{(j)}$ are the coordinate differences for the $m$-th surrounding vehicle, $N_v$ is the number of surrounding vehicles, and $\sigma$ is the standard deviation. Finally, by summing the products of the updated weights and the predicted particle state vectors, the optimal ego-vehicle coordinates are calculated using Equation (11):

$$\hat{\mathbf{x}}_t = \sum_{j=1}^{N_s} w^{(j)}\,\hat{\mathbf{x}}_t^{(j)} \quad (11)$$
where $\hat{\mathbf{x}}_t$ represents the updated state vector, and $N_s$ denotes the number of new particles. In addition, new particles proportional to the weights are generated using the systematic resampling method [35].

3. Experiments
3.1. Experimental Environment Configuration
To verify the cooperative localization technique presented in this paper, a GNSS-blocking scenario must be set up in advance; however, such a scenario introduces a high risk of accidents in real autonomous driving. Therefore, an experimental environment was constructed using an autonomous driving simulation tool based on a commercial game engine [36]. This simulation tool is divided into several parts, as shown in Figure 8.
The main driving simulation includes the simulation module, where an experiment scenario can be configured. It is possible to freely set up nonplayer-character artificial intelligence from the commercial game, static and dynamic objects, the weather and road environment, and so on. The analysis module analyzes the object recognition rate or path planning error to test the autonomous driving algorithm. The sensor module allows checking the output data of various sensors (cameras, lidar, radar, ultrasonic, GNSS, IMUs, and others) modeled in the autonomous driving simulator with a graphical user interface. In addition, the vehicle dynamics simulation, virtual map, vehicles, objects, external conditions, and other factors can be configured to closely resemble the real environment, providing safe conditions for autonomous driving experiments.
The multiuser part is an independent HW environment with driving simulator functions. It is expandable to multiple computers connected to a main driving simulation server, where multiple users (or clients) can drive in a single scenario in the main server [37], which is ideally suited for implementing a function to perform V2X communication with other clients. Each connected client transmits probe vehicle data and BSM information to the server in a multithreaded transmission control protocol/internet protocol, and automatically transmits and receives BSM messages through user datagram protocol communication for each client. The V2X message of the autonomous driving simulator consists of a header and a payload, as specified in the SAE J2735 standard, and the message is transmitted and received in the same period [38].
The GNSS hardware-in-the-loop simulation (HILS) comprises a hardware device consisting of a GNSS generator and receiver. The GNSS signal generator receives the ego-vehicle coordinates from the driving simulator and emits the corresponding GNSS radio frequency (RF) signals to air in real time. The GNSS receiver receives and processes GNSS RF signals to provide the ego-vehicle coordinates, which are used to intentionally generate a malfunction of the autonomous driving system that may occur when the GNSS sensor of the autonomous vehicle is affected by an RF interference signal or blockage. The driving route of the autonomous driving simulator equipped with the GNSS HILS can be logged on the electronic map in real time. In addition, a small jammer is installed independently to emulate the GNSS-denied environment [39].
3.2. Sensor Fusion and Multiobject Tracking Result of Surrounding Vehicles
Figure 9a is a screenshot of an autonomous driving test scenario implemented using the driving simulator. Figure 9b depicts the driving directions of the surrounding vehicles from an aerial view. We generated four vehicles that traveled simultaneously along a given route in the simulator and assigned vehicle IDs in order of increasing distance from the ego-vehicle. We then conducted the experiment using the ground-truth position information of the surrounding vehicles provided by the simulator. Note that the driving simulator can obtain the true information of the surrounding objects, as well as of the ego-vehicle itself, in the virtual world by accessing the internal memory of the software.
The IoU threshold was varied from 0.1 to 0.9, and MOT was performed on the surrounding vehicles within the range of the lidar measurements to verify the sensor fusion method. Figure 10 presents the performance evaluation with the precision and recall of the MOT results for different IoU thresholds [40]. Precision is the fraction of the tracks reported by the system that correspond to actual vehicles, whereas recall is the fraction of the actual vehicles that the system detects. With an IoU threshold of 0.1, a detection rate (recall) of 80% or more was achieved, but the precision was only about 60%. In contrast, with an IoU threshold of 0.9, a precision of 85% or more was achieved, but the detection rate fell below 6%. Thus, as the IoU threshold increases, precision rises while the detection rate falls. Therefore, this paper set the IoU threshold to 0.5, where the deviation between the two measures is relatively small.
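The per-frame counting behind these curves can be sketched as follows (a simplified matcher that gates on center distance; the paper gates matches on IoU, and `max_dist` is an assumed value):

```python
def precision_recall(predicted, truth, max_dist=2.0):
    """Match predicted vehicle positions to ground-truth positions, then
    report precision (correct fraction of predictions) and recall
    (detected fraction of ground-truth vehicles)."""
    unmatched = list(truth)
    tp = 0
    for p in predicted:
        for t in unmatched:
            if ((p[0] - t[0]) ** 2 + (p[1] - t[1]) ** 2) ** 0.5 <= max_dist:
                tp += 1
                unmatched.remove(t)  # each ground-truth vehicle matches once
                break
    fp = len(predicted) - tp
    fn = len(truth) - tp
    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if truth else 0.0
    return precision, recall
```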
Next, to analyze the performance of the sensor fusion presented in this paper, experiments were conducted based on the same scenario as in Figure 9. Figure 11 shows the predicted vehicle trajectory for Vehicle 1 with the ground truth data. This confirms that our algorithm tracks the vehicle traveling along the established route well. Table 1 shows the results of performing MOT on the test scenario in Figure 9 by vehicle ID. In this experiment, as the distance from the ego-vehicle increases (i.e., the vehicle ID increases), the root-mean-square error (RMSE) also increases, as expected from the characteristics of the lidar sensor.
3.3. Localization Results for the Ego-vehicle
Next, a curved driving path scenario was constructed to verify the localization method for the ego-vehicle in the cooperative technique. Figure 12 is a satellite and road view to demonstrate the route for the experiment. The ego-vehicle drives from the starting point to the goal point in Figure 12b, and the surrounding vehicles are placed around the driving path. The jammer is turned on when the ego-vehicle drives through the GNSS-denied point.
Figure 13 displays the localization results. The black dots mark the coordinates of the surrounding vehicles, and the blue line indicates the trajectory of the ego-vehicle. The other colored lines indicate the trajectories estimated with different numbers of tracked surrounding vehicles. When only one surrounding vehicle is tracked, the predicted trajectory is unreliable. However, as the number of surrounding vehicles increases, the actual and predicted trajectories become closer. Table 2 lists the RMSE of each trajectory. The positioning error decreases as the number of vehicles increases.
Finally, the trajectory errors of the different geometrical distributions of the surrounding vehicles are analyzed according to the distance and position of the surrounding vehicles. The trajectory varies depending on the location of the objects when localization is performed based on the surrounding objects. The scenario constructed to proceed with the experiment is presented in Figure 14.
Four experiments were performed for each scenario to measure the appropriate number of particles in a PF used in localization. Figure 15 displays the localization results where the black line and dots are the actual trajectories of the ego and surrounding vehicles, respectively. After adjusting the number of particles, the predicted trajectory result is presented as a colored line. In Scenarios A to C, the surrounding vehicles are on both sides of the actual trajectory but located gradually closer to the driving route. In Scenario D, the surrounding vehicles are clustered on the left.
Figure 16 depicts the results of analyzing the position error for each trajectory. Scenarios A and B each produced an error within 1 m, whereas maximum errors of 2 and 4 m were obtained in C and D, respectively. In addition, when the number of particles increases, the errors for C and D decrease relatively rapidly.
In each scenario, the position dilution of precision (PDOP) is calculated to analyze the effect of the sensor network arrangement on the localization error; it is also used to check the evenness of the surrounding vehicle arrangement [41]. Figure 17 presents the results of each scenario run with 100 particles from the start of driving to 100 s. The values for Scenarios A and B converge toward zero, but maximum values of 2.6 and 5.3 occurred in C and D, respectively. This experiment shows that the position is estimated accurately when the surrounding vehicles are evenly distributed; in contrast, the error increases when the surrounding vehicles are clustered or far from the ego-vehicle.
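One common way to compute a 2D PDOP from the vehicle arrangement is sketched below (our assumption: the geometry matrix is built from unit line-of-sight vectors to each tracked vehicle, with no clock term as in GNSS DOP):

```python
import numpy as np

def pdop_2d(ego, landmarks):
    """2D PDOP from the geometry of the tracked surrounding vehicles:
    sqrt(trace((G^T G)^-1)), where the rows of G are unit line-of-sight
    vectors from the ego-vehicle. Lower values mean a more even geometry."""
    rows = []
    for lm in landmarks:
        d = np.asarray(lm, dtype=float) - np.asarray(ego, dtype=float)
        rows.append(d / np.linalg.norm(d))
    G = np.array(rows)
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))
```

An evenly spread arrangement (e.g., vehicles to the north, south, east, and west) gives a low PDOP, while vehicles clustered on one side inflate it, consistent with the Scenario A versus D behavior.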
Additionally, Table 3 lists the RMSE by the number of particles and the average localization time for each scenario. As before, Scenario A, where the surrounding vehicles are evenly distributed, has a low error, whereas the clustered Scenario D has a large error. In addition, as the number of particles increases, the localization update period becomes longer. Consequently, if the number of particles is adjusted considering the arrangement of the surrounding vehicles, the localization closely follows the actual trajectory.
4. Conclusions
This paper presents a lidar sensor and V2X communication-based cooperative localization technique for autonomous vehicles. The proposed cooperative localization method comprises sensor fusion for surrounding vehicles and objects, and localization for the ego-vehicle. For the sensor fusion of the lidar output and V2X data for surrounding vehicles and objects, the SORT algorithm with the EKF in the late fusion manner was employed with the IoU-based MOT to minimize the sensor defect effect. The localization of the ego-vehicle using the information from the sensor fusion for the surrounding vehicles and objects was adopted in the method. The PF was employed to precisely localize the ego-vehicle based on the fused surrounding vehicle list. The cooperative localization method is expected to be ideally suited for autonomous vehicles, especially in a GNSS-denied environment.
Algorithm development and verification were performed using an autonomous driving simulator, with a GNSS HILS and a jammer intentionally used to configure the GNSS-denied environment for evaluating the cooperative localization technique. The coordinates of the surrounding vehicles were tracked with high accuracy using sensor fusion performing MOT. Different localization behaviors were observed according to the number of tracked surrounding vehicles, their distance from the ego-vehicle, and their density.
The simulation in this paper included only the case of static surrounding vehicles; however, synchronization between the lidar and V2X information may be problematic when the surrounding vehicles and the ego-vehicle are moving simultaneously. In future work, we will conduct experimental tests incorporating a deep learning method into the object-tracking algorithm for moving surrounding vehicles.
Conceptualization, J.-H.W.; methodology, M.-S.K.; software, M.-S.K. and J.-U.I.; validation, M.-S.K. and J.-H.W.; formal analysis, M.-S.K., J.-U.I. and J.-H.W.; investigation, M.-S.K.; resources, M.-S.K., J.-H.A. and J.-U.I.; data curation, M.-S.K.; writing—original draft preparation, M.-S.K. and J.-H.A.; writing—review and editing, J.-U.I. and J.-H.W.; supervision, J.-H.W.; project administration, J.-H.W.; funding acquisition, J.-H.W. All authors have read and agreed to the published version of the manuscript.
The authors declare that the research was conducted in the absence of anycommercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 2. Process to find the position and heading of the surrounding vehicle using point cloud data (Red dots).
Figure 5. Sensor fusion: (a) simple online and real-time tracking (SORT) and (b) fusion output. Black box as vehicle contour [1,2,4].
Figure 6. Block diagram of the sensor fusion module for tracking the surrounding vehicles.
Figure 10. Multiobject tracking results for surrounding vehicles: (a) precision and (b) recall [1,2,3,4].
Figure 12. Ego-vehicle localization scenario: (a) satellite view and (b) road view.
Figure 13. Localization results for the ego-vehicle with the number of surrounding vehicles [1,2,3,4].
Figure 14. Four scenarios for vehicle localization testing: (a) satellite view and (b) road view.
Figure 15. Localization technique results according to vehicle arrangement and the number of particles.
Figure 16. Localization technique position errors according to the vehicle arrangement and number of particles.
Table 1. Root-mean-square error by vehicle ID.
| ID | X (m) | Y (m) |
|---|---|---|
| 1 | 0.31 | 0.22 |
| 2 | 0.34 | 0.33 |
| 3 | 0.42 | 0.38 |
| 4 | 0.47 | 0.42 |
Table 2. Localization performance according to the number of surrounding vehicles (root-mean-square error).
| No. of Vehicles | X (m) | Y (m) |
|---|---|---|
| 1 | 7.19 | 12.86 |
| 2 | 1.97 | 1.11 |
| 3 | 0.39 | 0.31 |
| 4 | 0.21 | 0.17 |
Table 3. Root-mean-square errors of the localization technique according to the vehicle arrangement and number of particles.
| No. of Particles | Time (ms) | A: X (m) | A: Y (m) | B: X (m) | B: Y (m) | C: X (m) | C: Y (m) | D: X (m) | D: Y (m) |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 4.2 | 0.18 | 0.17 | 0.18 | 0.25 | 0.64 | 0.93 | 1.86 | 2.31 |
| 500 | 16.7 | 0.17 | 0.15 | 0.17 | 0.23 | 0.34 | 0.59 | 0.99 | 1.50 |
| 1000 | 42.3 | 0.16 | 0.14 | 0.17 | 0.22 | 0.31 | 0.55 | 0.95 | 1.26 |
| 2000 | 81.4 | 0.15 | 0.13 | 0.15 | 0.16 | 0.28 | 0.47 | 0.76 | 1.08 |
References
1. Koopman, P.; Wagner, M. Autonomous vehicle safety: An interdisciplinary challenge. IEEE Intell. Transp. Syst. Mag.; 2017; 9, pp. 90-96. [DOI: https://dx.doi.org/10.1109/MITS.2016.2583491]
2. Van Brummelen, J.; O’Brien, M.; Gruyer, D.; Najjaran, H. Autonomous vehicle perception: The technology of today and tomorrow. Transp. Res. Part C Emerg. Technol.; 2018; 89, pp. 384-406. [DOI: https://dx.doi.org/10.1016/j.trc.2018.02.012]
3. Varghese, J.Z.; Boone, R.G. Overview of autonomous vehicle sensors and systems. Proceedings of the 2015 International Conference on Operations Excellence and Service Engineering; Orlando, FL, USA, 10–11 September 2015; pp. 178-191.
4. Kaplan, E.D.; Hegarty, C. Understanding GPS/GNSS: Principles and Applications; Artech House: Norwood, MA, USA, 2017.
5. Nazaruddin, Y.Y.; Maani, F.A.; Muten, E.R.; Sanjaya, P.W.; Tjahjono, G.; Oktavianus, J.A. An Approach for the Localization Method of Autonomous Vehicles in the Event of Missing GNSS Information. Proceedings of the 2021 60th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE); Tokyo, Japan, 8–10 September 2021; pp. 881-886.
6. Aftatah, M.; Lahrech, A.; Abounada, A.; Soulhi, A. GPS/INS/Odometer data fusion for land vehicle localization in GPS denied environment. Mod. Appl. Sci.; 2016; 11, 62. [DOI: https://dx.doi.org/10.5539/mas.v11n1p62]
7. Magree, D.; Johnson, E.N. Combined Laser and Vision aided Inertial Navigation for an Indoor Unmanned Aerial Vehicle. Proceedings of the 2014 American Control Conference; Portland, OR, USA, 4–6 June 2014; pp. 1900-1905.
8. Conte, G.; Doherty, P. An Integrated UAV Navigation System Based on Aerial Image Matching. Proceedings of the 2008 IEEE Aerospace Conference; Big Sky, MT, USA, 1–8 March 2008; pp. 1-10.
9. Gu, D.Y.; Zhu, C.F.; Guo, J.; Li, S.X.; Chang, H.X. Vision-aided UAV navigation using GIS data. Proceedings of the 2010 IEEE International Conference on Vehicular Electronics and Safety; Qingdao, China, 15–17 July 2010; pp. 78-82.
10. Abboud, K.; Omar, H.A.; Zhuang, W. Interworking of DSRC and Cellular Network Technologies for V2X Communications: A Survey. IEEE Trans. Veh. Technol.; 2016; 65, pp. 9457-9470. [DOI: https://dx.doi.org/10.1109/TVT.2016.2591558]
11. Kim, J.S.; Choi, W.Y.; Jeong, Y.W.; Chung, C.C. Vehicle Localization Using Radar Calibration with Disconnected GPS. Proceedings of the 2021 21st International Conference on Control, Automation and Systems (ICCAS); Jeju, Republic of Korea, 12–15 October 2021; pp. 1371-1376.
12. Ma, S.; Wen, F.; Zhao, X.; Wang, Z.M.; Yang, D. An efficient V2X based vehicle localization using single RSU and single receiver. IEEE Access; 2019; 7, pp. 46114-46121. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2909796]
13. Perea-Strom, D.; Morell, A.; Toledo, J.; Acosta, L. GNSS integration in the localization system of an autonomous vehicle based on Particle Weighting. IEEE Sens. J.; 2020; 20, pp. 3314-3323. [DOI: https://dx.doi.org/10.1109/JSEN.2019.2955210]
14. Asvadi, A.; Girao, P.; Peixoto, P.; Nunes, U. 3D object tracking using RGB and LIDAR data. Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC); Rio de Janeiro, Brazil, 1–4 November 2016; pp. 1255-1260.
15. Qi, Y.; Yao, H.; Sun, X.; Sun, X.; Zhang, Y.; Huang, Q. Structure-aware multi-object discovery for weakly supervised tracking. Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP); Paris, France, 27–30 October 2014; pp. 466-470.
16. Qi, Y.; Zhang, S.; Qin, L.; Yao, H.; Huang, Q.; Lim, J.; Yang, M.H. Hedged deep tracking. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA, 27–30 June 2016; pp. 4303-4311.
17. Voigtlaender, P.; Luiten, J.; Torr, P.H.; Leibe, B. Siam r-cnn: Visual tracking by re-detection. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Seattle, WA, USA, 13–19 June 2020; pp. 6578-6588.
18. Fujii, S.; Fujita, A.; Umedu, T.; Kaneda, S.; Yamaguchi, H.; Higashino, T.; Takai, M. Cooperative vehicle positioning via V2V communications and onboard sensors. Proceedings of the 2011 IEEE Vehicular Technology Conference (VTC Fall); San Francisco, CA, USA, 5–8 September 2011; pp. 1-5.
19. Li, Y.; Ibanez-Guzman, J. Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems. IEEE Signal Process. Mag.; 2020; 37, pp. 50-61. [DOI: https://dx.doi.org/10.1109/MSP.2020.2973615]
20. Zhang, X.; Xu, W.; Dong, C.; Dolan, J.M. Efficient L-shape fitting for vehicle detection using laser scanners. Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV); Los Angeles, CA, USA, 11–14 June 2017; pp. 54-59.
21. Kim, J.K.; Kim, J.W.; Kim, J.H.; Jung, T.H.; Park, Y.J.; Ko, Y.H.; Jung, S. Experimental studies of autonomous driving of a vehicle on the road using LiDAR and DGPS. Proceedings of the 2015 15th International Conference on Control, Automation and Systems (ICCAS); Busan, Republic of Korea, 13–16 October 2015; pp. 1366-1369.
22. Wu, H.; Siegel, M.; Stiefelhagen, R.; Yang, J. Sensor fusion using Dempster-Shafer theory [for context-aware HCI]. IMTC/2002. Proceedings of the 19th IEEE Instrumentation and Measurement Technology Conference (IEEE Cat. No.00CH37276); Anchorage, AK, USA, 21–23 May 2002; Volume 1, pp. 7-12.
23. Snoek, C.G.M.; Worring, M.; Smeulders, A.W.M. Early versus late fusion in semantic video analysis. Proceedings of the 13th Annual ACM International Conference on Multimedia; Singapore, 6–11 November 2005; pp. 399-402.
24. Wojke, N.; Bewley, A.; Paulus, D. Simple online and realtime tracking with a deep association metric. Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP); Beijing, China, 17–20 September 2017; pp. 3645-3649.
25. Mauri, A.; Khemmar, R.; Decoux, B.; Ragot, N.; Rossi, R.; Trabelsi, R.; Boutteau, R.; Ertaud, J.Y.; Savatier, X. Deep Learning for Real-Time 3D Multi-Object Detection, Localisation, and Tracking: Application to Smart Mobility. Sensors; 2020; 20, 532. [DOI: https://dx.doi.org/10.3390/s20020532] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31963641]
26. Zhou, D.; Fang, J.; Song, X.; Guan, C.; Yin, J.; Dai, Y.; Yang, R. Iou loss for 2d/3d object detection. Proceedings of the 2019 International Conference on 3D Vision (3DV); Québec, QC, Canada, 16–19 September 2019; pp. 85-94.
27. Kramer, B.; MacKinnon, A. Localization: Theory and experiment. Rep. Prog. Phys.; 1993; 56, 1469. [DOI: https://dx.doi.org/10.1088/0034-4885/56/12/001]
28. Kaune, R. Accuracy studies for TDOA and TOA localization. Proceedings of the 2012 15th International Conference on Information Fusion; Singapore, 9–12 July 2012; pp. 408-415.
29. Xue, W.; Qiu, W.; Hua, X.; Yu, K. Improved Wi-Fi RSSI measurement for indoor localization. IEEE Sens. J.; 2017; 17, pp. 2224-2230. [DOI: https://dx.doi.org/10.1109/JSEN.2017.2660522]
30. Wann, C.D.; Yeh, Y.J.; Hsueh, C.S. Hybrid TDOA/AOA indoor positioning and tracking using extended Kalman filters. Proceedings of the 2006 IEEE 63rd Vehicular Technology Conference; Melbourne, Australia, 7–10 May 2006; pp. 1058-1062.
31. Bailey, T.; Nieto, J.; Guivant, J.; Stevens, M.; Nebot, E. Consistency of the EKF-SLAM algorithm. Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems; Beijing, China, 9–15 October 2006; pp. 3562-3568.
32. Wen, W.; Hsu, L.T.; Zhang, G. Performance analysis of NDT-based graph SLAM for autonomous vehicle in diverse typical driving scenarios of Hong Kong. Sensors; 2018; 18, 3928. [DOI: https://dx.doi.org/10.3390/s18113928] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30441784]
33. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot.; 2015; 31, pp. 1147-1163. [DOI: https://dx.doi.org/10.1109/TRO.2015.2463671]
34. Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. Robot. Sci. Syst.; 2014; 2, pp. 1-9.
35. Hol, J.D.; Schon, T.B.; Gustafsson, F. On resampling algorithms for particle filters. Proceedings of the 2006 IEEE Nonlinear Statistical Signal Processing Workshop; Cambridge, UK, 13–15 September 2006; pp. 79-82.
36. Lim, J.U.; Kang, M.S.; Park, D.H.; Won, J.H. Development of Commercial Game Engine-based Low Cost Driving Simulator for Researches on Autonomous Driving Artificial Intelligent Algorithms. Korean ITS J.; 2021; 20, pp. 242-263.
37. FiveM. Available online: https://fivem.net/ (accessed on 6 September 2022).
38. Kenney, J.B. Dedicated short-range communications (DSRC) standards in the United States. Proc. IEEE; 2011; 99, pp. 1162-1182. [DOI: https://dx.doi.org/10.1109/JPROC.2011.2132790]
39. Jammer4uk. Available online: https://www.jammer4uk.com/product/2017-usb-gps-jammer-usb-gps-l1-l2-blocker (accessed on 6 September 2022).
40. Davis, J.; Goadrich, M. The relationship between Precision-Recall and ROC curves. Proceedings of the 23rd International Conference on Machine Learning; Pittsburgh, PA, USA, 25–29 June 2006; pp. 233-240.
41. Langley, R.B. Dilution of precision. GPS World; 1999; 10, pp. 52-59.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Autonomous vehicles are equipped with multiple heterogeneous sensors and process the data from each sensor in real time while driving. Among these sensors, the global navigation satellite system (GNSS) is essential for localizing the vehicle itself. However, if a GNSS-denied situation occurs while driving, degraded positioning performance raises the risk of an accident. This paper presents a cooperative positioning technique based on a lidar sensor and vehicle-to-everything (V2X) communication. The ego-vehicle continuously tracks surrounding vehicles and objects and, especially in GNSS-denied situations, localizes itself using this tracking information. We demonstrate the effectiveness of the cooperative positioning technique by constructing a GNSS-denied case during autonomous driving. A numerical simulation using a driving simulator evaluates and verifies the proposed method in various scenarios.