Abstract: We have been developing a method to estimate a moving vehicle's position using MEMS sensor data, including acceleration, gyroscope, and geomagnetic sensors. This algorithm has been evaluated on several fixed courses, achieving good results with a position error of less than 1 meter. However, questions remain regarding whether the algorithm can distinguish between lanes and whether it is applicable to drivers and vehicles other than those used to obtain the reference data. This paper presents evaluation results of the algorithm on a two-lane road, demonstrating its ability to differentiate between lanes. Furthermore, it confirms that the algorithm can accurately estimate the vehicle's position even when tested with different vehicles and drivers.
Keywords: Localization, Acceleration, Gyroscope, Geomagnetic, MEMS sensor.
1. Introduction
The author has developed a vehicle localization algorithm that reduces reliance on GNSS. Specifically, sensor data from a reference vehicle (Vehicle A) is recorded in advance using MEMS sensors to capture features such as road surface conditions and vehicle heading. Subsequently, sensor data gathered from another vehicle (Vehicle B), which also uses MEMS sensors but lacks GNSS, is cross-correlated with the reference sensor data. By identifying the timestamp in the reference data that produces the highest correlation value, the precise position recorded by RTK-GNSS at that timestamp is used to estimate the position of Vehicle B. The outline of the algorithm is shown in Fig. 1. In preliminary experiments conducted in suburban Tottori, Japan, the algorithm achieved a positioning error of 0.37 m in smooth traffic conditions and about 1.6 m in fluctuating traffic conditions [1-4]. To the authors' knowledge, previous studies have focused on fixed road sections with a single lane in each direction [1-16]. To generalize the algorithm to roads with multiple lanes, evaluation tests were conducted in Ibaraki and Tokyo, verifying whether the algorithm can distinguish between adjacent lanes using MEMS sensor data. This paper is an extended version of the conference paper [1].
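The matching step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a single sensor channel, a normalized correlation coefficient as the similarity measure, and hypothetical names (`localize`, `ref_signal`, `ref_gnss`, `eval_window`); the actual algorithm fuses nine channels with optimized weights, as described in Section 2.

```python
import numpy as np

def localize(ref_signal, ref_gnss, eval_window):
    """Slide eval_window over the reference sensor series, pick the
    offset with the highest normalized correlation, and return the
    RTK-GNSS fix recorded at the matched reference timestamp."""
    n = len(eval_window)
    best_corr, best_idx = -np.inf, 0
    for i in range(len(ref_signal) - n + 1):
        seg = ref_signal[i:i + n]
        # correlation coefficient between this segment and the window
        c = np.corrcoef(seg, eval_window)[0, 1]
        if c > best_corr:
            best_corr, best_idx = c, i
    # position of Vehicle B ~ reference position at the matched timestamp
    return ref_gnss[best_idx + n - 1], best_corr
```

With a synthetic reference series and a window cut from it, the function recovers the window's end position in the reference track.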
2. Results of Experiments
2.1. Results of Multiple Lane Distinction on the Shuto Expressway
In Fig. 2, images of the Tokyo Metropolitan Expressway Route 7 (Shuto Expressway) test course are shown. The course consists of two lanes: a driving lane and a passing lane. Fig. 3 illustrates the overall evaluation test circular course, which includes the Tokyo Metropolitan Expressway Route 7 and is approximately 13 km long.
Fig. 4 also displays the devices used in the evaluation experiments. Fig. 4(a) shows the MEMS sensor InvenSense MPU-9250[17], which contains a 9-axis sensor, including an accelerometer, gyroscope, and geomagnetic sensor. Fig. 4(b) presents the RTK-GNSS module u-blox F9P [18], which achieves centimeter-level precision under favorable conditions and is used to obtain ground truth data for vehicle positioning.
Fig. 5 shows the Toyota Noah 2-liter, 5-door minivan used for data acquisition during the experiments. In this paper, a specific section of the circular course is selected for detailed examination. The selected section, marked by red arrows in Fig. 3, is 3,972 meters long and corresponds to Tokyo Metropolitan Expressway Route 7.
The objectives of the evaluation experiments are as follows:
1. Can the proposed vehicle localization algorithm identify the specific lane (driving or passing) on which the vehicle is traveling on a multilane road?
2. How accurate is the estimated vehicle position?
To answer these questions, we conducted the data acquisition runs summarized in Table 1. The vehicle was driven around the circular test course (Fig. 3) four times. From these runs, we extracted four time-series datasets, each starting and ending at the times specified in Table 1. In Japan, the left lane is referred to as the "driving lane," while the right lane is known as the "passing lane," where vehicles generally travel at higher speeds. This is evident from the travel time data provided in Table 1.
2.2. Sensor Data Processing
Pitch rate data obtained during TRIP1 and TRIP2, for example, are shown in Fig. 6. Among the nine types of sensor data, the pitch rate contributes the most significantly [4]. During TRIP1 and TRIP2, the vehicle traveled in the same left lane, so the pitch rate time series are expected to resemble each other. However, since the velocity in TRIP2 is slightly higher than that in TRIP1, the lengths of the time series differ. Here, the "time of day" in seconds is defined by the following equation:
Time of day = 3600 × hour + 60 × minute + second (1)
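Eq. (1) simply converts a wall-clock timestamp into seconds elapsed since midnight, which can be written directly as:

```python
def time_of_day(hour, minute, second):
    """Eq. (1): seconds elapsed since midnight."""
    return 3600 * hour + 60 * minute + second
```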
Fig. 7 shows the velocity profiles of TRIP1 and TRIP2. As seen in Fig. 7, the velocity profile is not constant but varies depending on the driver's characteristics and traffic conditions. The sensor data, in this sense, are velocity-modulated by the velocity profile. If the velocity were constant, the cross-correlation function calculated from the sensor data would exhibit a constant time lag between the two sets of vehicle data. In this paper, the first dataset is referred to as the "reference data," while the second dataset is called the "evaluation data."
For this analysis, the pitch rate data from TRIP1 serve as the reference data, and those from TRIP2 serve as the evaluation data.
Examples of cross-correlation calculations for the two sets of MEMS sensor data (TRIP1 and TRIP2, left lane) are shown in Fig. 8. Here, we calculated a weighted sum of the nine cross-correlation functions derived from the 9-axis sensor data (pitch, roll, and yaw rates, as well as acceleration and geomagnetic data for the x, y, and z axes). The optimal weights were determined using the algorithm described in [3]. In Fig. 8, the vertical axis represents the time lag, and the horizontal axis represents the travel time of the evaluation vehicle. The initial time lag was set to 0 seconds.
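The weighted-sum construction can be sketched as follows. This is a simplified illustration, not the algorithm of [3]: it uses circular shifting (`np.roll`) for brevity, takes the channel weights as given, and the function and argument names are assumptions.

```python
import numpy as np

def weighted_correlation(ref_channels, eval_channels, weights, max_lag):
    """Weighted sum of per-channel normalized cross-correlations over a
    range of time lags. ref_channels/eval_channels: dicts of equal-length
    1-D arrays keyed by channel name; weights: dict of channel weights."""
    lags = np.arange(-max_lag, max_lag + 1)
    total = np.zeros(len(lags))
    for name, w in weights.items():
        # remove the mean so each channel contributes a correlation coefficient
        r = ref_channels[name] - ref_channels[name].mean()
        e = eval_channels[name] - eval_channels[name].mean()
        denom = np.sqrt((r ** 2).sum() * (e ** 2).sum())
        for k, lag in enumerate(lags):
            shifted = np.roll(e, lag)  # circular shift, for simplicity
            total[k] += w * (r * shifted).sum() / denom
    return lags, total
```

For identical reference and evaluation channels the weighted sum peaks at zero lag, which is the horizontal bright line one expects in the same-lane correlograms.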
A bright line is visible in Fig. 8, indicating a strong correlation. However, the bright line descends as travel time progresses due to the non-negligible velocity difference between TRIP1 and TRIP2, as shown in Fig. 7. Consequently, the time lag begins at 0 seconds and ends at approximately -30 seconds. This significant change in time lag complicates the design of a noise filter to accurately estimate the optimal time lag from the cross-correlation results.
Fig. 9 illustrates the cross-correlation results for TRIP3 and TRIP4 (right lane). In this case, the time lag does not change as significantly as it does for TRIP1 and TRIP2. However, it still presents challenges for designing an effective noise filter.
2.3. Velocity Compensation and Cross-Correlation Analysis
Fig. 10 shows the pitch rate sensor data after compensating for them according to the velocity profiles, as if the vehicle were traveling at a constant velocity of 60 km/h. The compensation was applied using the following formula:
φ(t) = (1/v_s) ∫_0^t v(τ) dτ, (2)
where φ(t) represents a hypothetical time axis under the assumption that the vehicle moves at a constant velocity v_s, τ denotes the real-time axis, and v(τ) is the measured vehicle velocity.
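A minimal sketch of this compensation, assuming the standard time-warping form φ(t) = (1/v_s) ∫ v(τ) dτ (distance travelled divided by the target speed) and linear interpolation for the resampling; the function name and the resampling interval `dt` are assumptions:

```python
import numpy as np

def velocity_compensate(t, v, signal, v_s=60 / 3.6, dt=0.1):
    """Resample `signal` onto a time axis on which the vehicle appears
    to travel at the constant speed v_s (m/s).
    t: sample times (s), v: measured speed (m/s), signal: sensor values."""
    # cumulative distance travelled, then the warped time axis phi(t)
    phi = np.concatenate(([0.0], np.cumsum(np.diff(t) * v[:-1]))) / v_s
    # uniform grid on the warped axis; interpolate the signal onto it
    phi_uniform = np.arange(0.0, phi[-1], dt)
    return phi_uniform, np.interp(phi_uniform, phi, signal)
```

As a sanity check, a run already driven at exactly v_s is left unchanged: the warped axis coincides with real time.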
After velocity compensation, the travel times of the evaluation vehicle are adjusted to match those of the reference vehicle. Using the velocity-compensated sensor data, the cross-correlation was recalculated, and the results are shown in Fig. 11. In same-lane cases, such as Figs. 11(a) and 11(b), clear horizontal bright lines appear in the correlation functions. However, in the different-lane case shown in Fig. 11(c), where TRIP1 and TRIP3 data were correlated, the correlation values are very poor. Similarly, the other different-lane cases (TRIP1-4, TRIP2-3, and TRIP2-4) also exhibit poor cross-correlation results.
2.4. Vehicle Localization from Cross-Correlation Functions
After obtaining the cross-correlation functions, the algorithm estimates the optimum time lag profile. Once this profile is identified, the algorithm estimates the location of the evaluation vehicle according to the procedure described in Fig. 1. From the results in Fig. 11(a), (b) and (c), the correlation functions for same-lane and different-lane cases show distinct characteristics, enabling lane distinction.
Fig. 12 shows the maximum correlation plots for TRIP3-4 (same-lane case). These plots form an almost horizontal line, though some noisy data points are present.
Fig. 13 displays the result of applying a simple Kalman filter to Fig. 12, yielding a much clearer horizontal line.
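A "simple Kalman filter" of the kind used here can be sketched as a scalar filter that models the time lag as a slowly drifting level observed with noise. The noise variances `q` and `r` below are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=1.0):
    """Scalar Kalman filter for a near-constant level: z is the noisy
    sequence of maximum-correlation time lags; q is the process noise
    (allowed drift), r the measurement noise variance."""
    x, p = z[0], 1.0
    out = np.empty(len(z))
    for i, meas in enumerate(z):
        p += q                  # predict: level unchanged, uncertainty grows
        k = p / (p + r)         # Kalman gain
        x += k * (meas - x)     # update with the new measurement
        p *= (1 - k)
        out[i] = x
    return out
```

Applied to a noisy but essentially constant lag sequence, the filtered output settles near the true lag with far less scatter than the raw plots, which is the effect seen in going from Fig. 12 to Fig. 13.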
Fig. 14 shows the maximum correlation plots for TRIP2-3 (different-lane case). In this case, the plots are highly noisy, and no effective noise filter can produce a horizontal line like that of Fig. 13.
Fig. 15 presents the histograms of maximum correlation plots for TRIP1-3 (different-lane case) and TRIP3-4 (same-lane case). It is evident that the histogram is much broader for the different-lane case than for the same-lane case. This difference in histogram shape makes it possible to determine whether the vehicle is running in the same lane as the reference data. From the estimated time lag, the vehicle's location is derived using the RTK-GNSS location data of the reference vehicle at the corresponding time, as described in Fig. 1.
In Fig. 16(a), the position estimation error over time is shown for the evaluation vehicle running in the left lane (driving lane), while Fig. 16(b) illustrates the directional error for the same scenario. Similarly, Fig. 17(a) presents the position estimation error over time for the vehicle running in the right lane (passing lane), with Fig. 17(b) displaying the corresponding directional error. Based on the results in Figs. 16(a) and 17(a), the RMSE was 1.40 m for the driving lane and 1.41 m for the passing lane, indicating comparable accuracy in both lanes, despite occasional error spikes. However, when the vehicle is running in a different lane, position estimation becomes challenging due to poor cross-correlation. This limitation nonetheless highlights a positive aspect of the algorithm: it effectively identifies the running lane. The directional position error remains confined to the vehicle's heading, provided that the lane is correctly recognized.
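For reference, the RMSE figures quoted above follow the usual definition: the root mean square of the point-wise distance between the estimated track and the RTK-GNSS ground truth. A minimal sketch, assuming both tracks are given as (N, 2) arrays of local x/y coordinates in metres:

```python
import numpy as np

def rmse(est, truth):
    """Root-mean-square position error between an estimated track and
    the RTK-GNSS ground-truth track, both (N, 2) arrays in metres."""
    d = np.linalg.norm(np.asarray(est) - np.asarray(truth), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```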
3. Impact of Different Driver and Vehicle on the Proposed Algorithm
Although the lane distinction algorithm performed well on the Metropolitan Expressway, a challenge was encountered on National Route 349 in suburban Ibaraki. The course is shown in Fig. 18. The evaluation section, indicated by the red arrow, is a fairly smooth, two-lane bypass road with a central median running through a rural area.
In this course, we tested the effects of different drivers and different vehicles on the proposed vehicle localization algorithm. Driving data was collected, as shown in Table 2. Eight trips were conducted: the first four trips were driven by a 29-year-old female driver using a SUZUKI SWIFT, while the latter four trips were driven by a 68-year-old male driver using a TOYOTA Yaris Cross.
In Table 3, the two vehicles are compared. The TOYOTA Yaris Cross is larger, heavier, and has a longer wheelbase than the SUZUKI SWIFT. These and the suspension characteristics of the vehicles may influence sensor data to varying degrees.
Table 4 summarizes the evaluation results in matrix form, using each of the eight trips in turn as reference data and evaluating it against all eight trips.
To summarize the results shown in Table 4:
i. The accuracy is generally good when using the same vehicle and driver for both reference and evaluation data;
ii. Even if the vehicle or driver differs between reference and evaluation data, as long as the same lane is driven, no significant increase in error is observed;
iii. Even when the driving lanes differ between reference and evaluation data, positioning is possible due to the correlation. However, errors larger than the lane width may occur;
iv. There are cases where road surface features are not captured, leading to increased errors (indicated by green areas in Table 4).
4. Conclusions
This study demonstrated that the MEMS-based vehicle localization algorithm can achieve a position accuracy of approximately 1 meter on the Metropolitan Expressway and can effectively distinguish between multiple lanes. However, distinguishing adjacent lanes remains challenging when their sensor data are similar. Improving on these results may require refining the algorithm to better select the most appropriate reference data; details of the revised algorithm will be presented in future work. A significant advance is that the algorithm proved applicable to different vehicles and drivers, albeit in limited cases.
Acknowledgements
This work was supported in part by the Japan Road Map Association.
References
[1]. T. Yokota, Vehicle localization algorithm with lane discrimination based on inertial and geomagnetic sensor data for GNSS-denied environments, in Proceedings of the 5th Winter IFSA Conference on Automation, Robotics and Communications for Industry 4.0/5.0 (ARCI'25), 2025, pp. 55-61.
[2]. T. Yokota, Network-wide vehicle localization algorithm based on MEMS sensor data, Sensors & Transducers, Vol. 265, Issue 12, 2024, pp. 17-26.
[3]. T. Yokota, Vehicle localization by correlated MEMS sensor data with velocity compensation, in Proceedings of the IEEE 26th International Conference on Intelligent Transportation Systems (ITSC'23), 2023, pp. 1-6.
[4]. T. Yokota, T. Yamagiwa, Vehicle localization by optimally weighted use of MEMS sensor data, in Proceedings of the 3rd IFSA Winter Conference on Automation, Robotics and Communications for Industry 4.0/5.0 (ARCI'23), 2023, pp. 244-249.
[5]. T. Yamagiwa, T. Yokota, Vehicle localization method based on MEMS sensor data comprising pressure, acceleration and angular velocity, in Proceedings of the IEEE International Conference on Mechatronics and Automation (ICMA'22), 2022, pp. 1820-1825.
[6]. T. Yokota, Vehicle localization by altitude data matching in spatial domain and its fusion with dead reckoning, International Journal of Mechatronics and Automation, Vol. 8, Issue 4, 2018, pp. 208-216.
[7]. T. Yokota, Vehicle localization by dynamic programming from altitude and yaw rate time series acquired by MEMS sensor, SICE Journal of Control, Measurement, and System Integration, Vol. 14, Issue 1, 2021, pp. 78-88.
[8]. T. Yokota, Vehicle localization based on MEMS sensor data, in Proceedings of the 60th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE'21), 2021, pp. 1468-1473.
[9]. T. Yokota, Localization algorithm based on altitude time series in GNSS-denied environments, in Proceedings of the SICE Annual Conference (SICE'20), 2020, pp. 952-957.
[10]. T. Yokota, M. Okude, T. Sakamoto, R. Kitahara, Fast and robust map-matching algorithm based on a global measure and dynamic programming for sparse probe data, IET Intelligent Transport Systems, Vol. 13, Issue 11, 2019, pp. 1613-1623.
[11]. J. Tsurushiro, T. Nagaosa, Vehicle localization using its vibration caused by road surface roughness, in Proceedings of the IEEE International Conference on Vehicular Electronics and Safety (ICVES'15), 2015, pp. 164-169.
[12]. A. J. Dean, R. D. Martini, S. N. Brennan, Terrain-based road vehicle localization using particle filters, Vehicle System Dynamics, Vol. 49, Issue 8, 2011, pp. 1209-1223.
[13]. E. Laftchiev, C. Lagoa, S. Brennan, Terrain-based vehicle localization from real-time data using dynamical models, in Proceedings of the American Control Conference (ACC'12), 2012, pp. 1089-1094.
[14]. J. Gim, C. Ahn, Ground feature-based vehicle positioning, in Proceedings of the SICE Annual Conference (SICE'20), 2020, pp. 983-984.
[15]. J. Gim, C. Ahn, IMU-based virtual road profile sensor for vehicle localization, Sensors, Vol. 18, Issue 10, 2018, 3409.
[16]. X. Qu, B. Soheilian, N. Paparoditis, Landmark-based localization in urban environment, ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 140, 2018, pp. 90-103.
[17]. InvenSense, MPU-9250 Product Page, https://invensense.tdk.com/products/motion-tracking/9-axis/mpu-9250/
[18]. u-blox, ZED-F9P Product Page, https://www.u-blox.com/en/product/zed-f9p-module
© 2025. This work is published under https://creativecommons.org/licenses/by/4.0/ (the "License").