
Abstract

Localization is among the most important prerequisites for autonomous navigation. Vision-based systems have received great attention in recent years due to the numerous advantages of cameras over other sensors. Reducing the computational burden of such systems is an active research area, making them applicable to resource-constrained platforms. This paper proposes a fast monocular approach, named ARM-VO, and compares it with two state-of-the-art algorithms, LibViso2 and ORB-SLAM2, on the Raspberry Pi 3. The approach is a sequential frame-to-frame scheme that extracts a sparse set of well-distributed features and tracks them in upcoming frames using the Kanade–Lucas–Tomasi (KLT) tracker. A robust model selection is used to avoid degenerate cases of the fundamental matrix. Scale ambiguity is resolved by incorporating the known camera height above the ground. The method is open-sourced [https://github.com/zanazakaryaie/ARM-VO] and implemented in ROS, mostly using NEON C intrinsics, while exploiting the multi-core architecture of the CPU. Experiments on the KITTI dataset showed that ARM-VO is 4–5 times faster than the other methods and is the only one that runs in near real time on the Raspberry Pi 3. It achieves significantly better accuracy than LibViso2 and ranks second after ORB-SLAM2.
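To make the frame-to-frame tracking step concrete, below is a minimal single-scale Lucas–Kanade sketch in pure NumPy, applied to a synthetic image pair with a known 1-pixel shift. This is only an illustration of the KLT principle the abstract refers to; the actual ARM-VO implementation (grid-based feature distribution, pyramidal tracking, NEON optimization, model selection) is in the linked repository, and the function and image pattern here are my own hypothetical choices.

```python
import numpy as np

def lk_step(I0, I1, x0, y0, win=15):
    """One Lucas-Kanade least-squares step: estimate flow d = (dx, dy)
    such that I1(p + d) ~= I0(p) over a win x win window centred on
    the integer point (x0, y0)."""
    h = win // 2
    W0 = I0[y0 - h:y0 + h + 1, x0 - h:x0 + h + 1]
    W1 = I1[y0 - h:y0 + h + 1, x0 - h:x0 + h + 1]
    # Spatial gradients of the current frame's window (rows = y, cols = x).
    Iy, Ix = np.gradient(W1)
    # Linearized brightness-constancy: [Ix Iy] . d = I0 - I1.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = (W0 - W1).ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (dx, dy)

# Synthetic pair: a smooth pattern shifted 1 pixel to the right.
ys, xs = np.mgrid[0:100, 0:100]
I0 = np.sin(2 * np.pi * xs / 20) + np.cos(2 * np.pi * ys / 25)
I1 = np.roll(I0, 1, axis=1)  # I1[y, x] = I0[y, x - 1]

dx, dy = lk_step(I0, I1, 50, 50)
print(dx, dy)  # dx close to 1.0, dy close to 0.0
```

A real tracker iterates this step over an image pyramid so that displacements larger than about a pixel also converge, which is what OpenCV's `calcOpticalFlowPyrLK` (the usual KLT implementation) does.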

Details

Title
ARM-VO: an efficient monocular visual odometry for ground vehicles on ARM CPUs
Author
Nejad, Zana Zakaryaie 1; Ahmadabadian, Ali Hosseininaveh 1

1 Faculty of Geodesy and Geomatics, K. N. Toosi University of Technology, Tehran, Iran
Pages
1061-1070
Publication year
2019
Publication date
Sep 2019
Publisher
Springer Nature B.V.
ISSN
0932-8092
e-ISSN
1432-1769
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2277254897
Copyright
Machine Vision and Applications is a copyright of Springer (2019). All Rights Reserved.