© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

Existing RGB image-based object detection methods achieve high accuracy on static or quasi-static objects but degrade markedly on fast-moving objects because of motion blur. Moreover, state-of-the-art deep learning methods that take RGB images as input require training and inference on high-performance graphics cards, which are bulky, power-hungry, and difficult to deploy on compact robotic platforms. Fortunately, the emergence of event cameras, inspired by biological vision, provides a promising solution to these limitations. These cameras offer low latency, minimal motion blur, and non-redundant output, making them well suited for dynamic obstacle detection. Building on these advantages, a novel methodology was developed that fuses events with depth to address dynamic object detection. First, an adaptive temporal sampling window selectively acquires event data and supplementary information depending on whether objects are present in the field of view. Next, a warping transformation is applied to the event data, removing artifacts induced by ego-motion while preserving signals originating from moving objects. The transformed events are then converted into an event queue representation, on which denoising is performed. Finally, objects are detected by applying image moment analysis to the denoised event queue representation. The experimental results show that, compared with current state-of-the-art methods, the proposed method improves detection speed by approximately 20% and accuracy by approximately 5%.
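The ego-motion compensation step described above can be illustrated with a minimal sketch. The snippet below assumes a simple constant global optical flow model `(vx, vy)` derived from ego-motion, rather than the authors' exact warping formulation; the function names and the count-image accumulation are illustrative only. The key idea it demonstrates is that events consistent with ego-motion collapse onto sharp structures after warping, while events from independently moving objects remain misaligned.

```python
import numpy as np

def warp_events(xs, ys, ts, flow, t_ref):
    """Warp event coordinates back to a reference time t_ref using a
    global ego-motion-induced optical flow (vx, vy) in pixels/second.
    Background events collapse onto sharp edges; events produced by
    independently moving objects stay spread out."""
    dt = ts - t_ref
    wx = xs - flow[0] * dt
    wy = ys - flow[1] * dt
    return wx, wy

def accumulate(wx, wy, shape):
    """Accumulate warped events into a per-pixel count image,
    discarding events warped outside the frame."""
    xi = np.round(wx).astype(int)
    yi = np.round(wy).astype(int)
    ok = (xi >= 0) & (xi < shape[1]) & (yi >= 0) & (yi < shape[0])
    img = np.zeros(shape, dtype=np.int32)
    np.add.at(img, (yi[ok], xi[ok]), 1)  # unbuffered scatter-add
    return img
```

For example, events generated by a static edge drifting at 5 px/s under camera motion all map back to the same pixel after warping with that flow, producing a high count at one location instead of a blurred streak.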
To substantiate real-world applicability, a complete obstacle avoidance pipeline was implemented, integrating the proposed detector with planning modules, and deployed on a custom-built quadrotor platform. Field tests confirm reliable avoidance of an obstacle approaching at approximately 8 m/s, validating the method's practical deployment potential.
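The moment-based detection step mentioned in the abstract can be sketched as follows. This is a generic illustration of image moment analysis on a binarized event-count image, not the paper's exact detector; the function name, threshold parameter, and extent heuristic are assumptions. The centroid is computed as the first raw moments normalized by the zeroth moment (m10/m00, m01/m00), and the central second moments give a rough object extent.

```python
import numpy as np

def detect_by_moments(count_img, thresh=1):
    """Locate a candidate moving object in an event-count image via
    image moments. On the binarized mask, the pixel-coordinate means
    equal m10/m00 and m01/m00 (the centroid), and the central second
    moments yield a rough half-extent of the blob."""
    ys, xs = np.nonzero(count_img >= thresh)
    if xs.size == 0:
        return None  # no object in view
    cx = xs.mean()  # m10 / m00
    cy = ys.mean()  # m01 / m00
    # two standard deviations as a rough bounding half-extent
    sx = 2.0 * np.sqrt(((xs - cx) ** 2).mean())
    sy = 2.0 * np.sqrt(((ys - cy) ** 2).mean())
    return cx, cy, sx, sy
```

Because only sums over active pixels are required, this step runs in time linear in the number of events that survive warping and denoising, which is consistent with the low-latency goal stated in the abstract.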

Details

Title
A Low-Latency Dynamic Object Detection Algorithm Fusing Depth and Events
Author
Chen, Duowen 1; Zhou, Liqi 2; Guo, Chi 3

1 GNSS Research Center, Wuhan University, Wuhan 430079, China; [email protected]; School of Electronic Information, Wuhan University, Wuhan 430072, China
2 School of Computer Science, Wuhan University, Wuhan 430072, China; [email protected]
3 GNSS Research Center, Wuhan University, Wuhan 430079, China; [email protected]; Hubei Luojia Laboratory, Wuhan 430079, China
First page
211
Publication year
2025
Publication date
2025
Publisher
MDPI AG
e-ISSN
2504-446X
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3181428058