Abstract

Planar surfaces are prevalent components of man-made indoor scenes, and plane extraction plays a vital role in practical applications of computer vision and robotics, such as scene understanding and mobile manipulation. Most existing plane extraction methods are based on reconstruction of the scene. In this paper, plane representation is formulated in inverse-depth images, and based on this representation we explore the potential to extract planes directly in images. A fast plane extraction approach, which employs a region-growing algorithm in inverse-depth images, is presented. This approach consists of two main components: seeding and region growing. In the seeding component, seeds are carefully selected locally in grid cells to improve exploration efficiency. After seeding, each seed grows in turn into a continuous plane. Both a greedy policy and a normal coherence check are employed to locate boundaries accurately. During growth, neighboring coplanar regions are checked and merged to overcome over-segmentation. In experiments on public datasets and generated saw-tooth images, the proposed approach achieves an 80.2% correct detection rate (CDR) on the ABW SegComp dataset, demonstrating performance comparable to the state of the art. The approach runs at 5 Hz on typical 640 × 480 images, showing its potential, with further improvement, for real-time applications in computer vision and robotics.
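
The representation the abstract refers to can be stated briefly: under a pinhole camera model with normalized image coordinates u = X/Z, v = Y/Z and inverse depth ρ = 1/Z, a 3D plane n_x·X + n_y·Y + n_z·Z = d satisfies ρ = (n_x·u + n_y·v + n_z)/d, i.e., inverse depth is an affine function of the image coordinates (and hence of pixel coordinates, which are themselves affine in u and v). The sketch below is a minimal illustration of seeded region growing against such an affine model, not the authors' implementation; the function names and the parameters cell, tol, and min_size are illustrative assumptions, and the paper's greedy boundary policy, normal coherence check, and coplanar-region merging are omitted.

```python
import numpy as np
from collections import deque

def fit_affine(us, vs, rhos):
    # Least-squares fit of rho = a*u + b*v + c. In an inverse-depth image
    # a 3D plane maps to exactly this affine form, so an affine model over
    # pixel coordinates is a plane model.
    A = np.stack([us, vs, np.ones_like(us, dtype=float)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, rhos, rcond=None)
    return coeffs  # (a, b, c)

def extract_planes(inv_depth, cell=20, tol=1e-3, min_size=200):
    # Toy seeded region growing on an inverse-depth image (rho = 1/Z per
    # pixel). One seed per grid cell (cell >= 5 assumed); each seed fits an
    # affine plane to a 5x5 patch, then grows by BFS, absorbing 4-neighbors
    # whose rho fits the model within `tol`. Returns labels (0 = unassigned).
    h, w = inv_depth.shape
    labels = np.zeros((h, w), dtype=np.int32)
    label = 0
    for su in range(cell // 2, h - 2, cell):
        for sv in range(cell // 2, w - 2, cell):
            if labels[su, sv]:
                continue
            uu, vv = np.mgrid[su - 2:su + 3, sv - 2:sv + 3]
            a, b, c = fit_affine(uu.ravel(), vv.ravel(),
                                 inv_depth[su - 2:su + 3, sv - 2:sv + 3].ravel())
            label += 1
            frontier, members = deque([(su, sv)]), []
            labels[su, sv] = label
            while frontier:
                u, v = frontier.popleft()
                members.append((u, v))
                for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nu, nv = u + du, v + dv
                    if 0 <= nu < h and 0 <= nv < w and not labels[nu, nv] \
                       and abs(a * nu + b * nv + c - inv_depth[nu, nv]) < tol:
                        labels[nu, nv] = label
                        frontier.append((nu, nv))
            if len(members) < min_size:  # reject tiny regions, reuse the label
                for u, v in members:
                    labels[u, v] = 0
                label -= 1
    return labels

# Example: a synthetic inverse-depth ramp is a single plane, so one region.
rho = np.fromfunction(lambda u, v: 1e-3 * (u + 2 * v) + 0.1, (120, 160))
print(np.unique(extract_planes(rho, cell=20)))  # -> [1]
```

Because the model is affine in pixel coordinates, growth needs only one multiply-add and one comparison per candidate pixel, which is consistent with the abstract's emphasis on speed; the published method additionally verifies normal coherence at boundaries rather than relying on the residual test alone.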

Details

Title
A Plane Extraction Approach in Inverse-Depth Images Based on Region-Growing
First page
1141
Publication year
2021
Publication date
2021
Publisher
MDPI AG
e-ISSN
1424-8220
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2488069182
Copyright
© 2021. This work is licensed under http://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.