
Abstract

In predicting pedestrian trajectories for indoor mobile robots, it is crucial to measure the distance between indoor pedestrians and the robot accurately. This study addresses that requirement by extracting pedestrians as regions of interest while mitigating the effects of inaccurate depth-camera distance measurements and varying illumination. To tackle these challenges, we propose an improved H-GrabCut image segmentation algorithm that segments indoor pedestrians in four steps. First, we use the YOLO-V5 object recognition algorithm to construct detection nodes. Next, we propose an enhanced BIL-MSRCR algorithm to sharpen the edge details of pedestrians. Finally, we optimize the clustering features of the GrabCut algorithm by incorporating two-dimensional entropy, UV component distance, and LBP texture feature values. Experimental results show that the algorithm achieves a segmentation accuracy of 97.13% on both the INRIA dataset and real-world tests, outperforming alternative methods in sensitivity, missegmentation rate, and intersection-over-union. These experiments confirm the feasibility and practicality of the approach. The findings will be used in the preprocessing stage of pedestrian trajectory prediction for indoor mobile robots and will enable path planning based on the predicted trajectories.
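For illustration, the sketch below shows the general idea of detector-seeded GrabCut segmentation with an illumination-normalizing pre-step, using standard OpenCV routines. It is a minimal, hypothetical stand-in rather than the paper's H-GrabCut or BIL-MSRCR: the improved clustering features (two-dimensional entropy, UV component distance, LBP texture) are not implemented, and the image path, bounding box, and function names are illustrative assumptions.

```python
# Hypothetical sketch: bounding-box-seeded GrabCut with a simplified
# retinex-style pre-enhancement. NOT the paper's BIL-MSRCR or H-GrabCut.
import cv2
import numpy as np


def retinex_enhance(bgr, sigma_color=75, sigma_space=75):
    """Rough single-scale retinex using a bilateral filter as the
    illumination estimate (a simplified stand-in for BIL-MSRCR)."""
    img = bgr.astype(np.float32) + 1.0          # avoid log(0)
    illum = cv2.bilateralFilter(bgr, d=9,
                                sigmaColor=sigma_color,
                                sigmaSpace=sigma_space).astype(np.float32) + 1.0
    reflectance = np.log(img) - np.log(illum)   # reflectance in log domain
    reflectance = cv2.normalize(reflectance, None, 0, 255, cv2.NORM_MINMAX)
    return reflectance.astype(np.uint8)


def segment_pedestrian(bgr, bbox, iters=5):
    """Run OpenCV GrabCut seeded with a detector bounding box (x, y, w, h),
    e.g. one produced by a YOLO-V5 detection, and return a binary mask."""
    mask = np.zeros(bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)   # background GMM parameters
    fgd_model = np.zeros((1, 65), np.float64)   # foreground GMM parameters
    cv2.grabCut(bgr, mask, bbox, bgd_model, fgd_model,
                iters, cv2.GC_INIT_WITH_RECT)
    # Definite and probable foreground pixels become the pedestrian region
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
    return fg.astype(np.uint8)


if __name__ == "__main__":
    frame = cv2.imread("pedestrian.jpg")        # placeholder image path
    enhanced = retinex_enhance(frame)
    bbox = (120, 60, 180, 400)                  # placeholder detector box
    person_mask = segment_pedestrian(enhanced, bbox)
    cv2.imwrite("pedestrian_mask.png", person_mask)
```

In the paper's pipeline, the bounding box would come from the YOLO-V5 detection stage and the enhancement would be the proposed BIL-MSRCR with optimized clustering features, rather than the simplified retinex and stock GrabCut used in this sketch.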

Details

Title
An H-GrabCut Image Segmentation Algorithm for Indoor Pedestrian Background Removal
Author
Huang, Xuchao 1; Wang, Shigang 1; Gao, Xueshan 1; Luo, Dingji 2; Xu, Weiye 1; Pang, Huiqing 1; Zhou, Ming 3

1 School of Automation, Guangxi University of Science and Technology, Liuzhou 545000, China; [email protected] (X.H.); Key Laboratory of Intelligent Sensing and Control, Liuzhou 545000, China
2 Mechanical and Electrical College, Beijing Institute of Technology, Beijing 100190, China
3 Hangke Jinggong Co., Ltd., Beijing 102400, China
First page
7937
Publication year
2023
Publication date
2023
Publisher
MDPI AG
e-ISSN
1424-8220
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2869630253
Copyright
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.