Abstract
Semi-structured environments are difficult for autonomous driving because the drivable area contains numerous unknown obstacles, has no lanes, and varies considerably in width and curvature. In such environments, searching for a path in real time is difficult, and localization data are inaccurate, reducing path-tracking accuracy. Instead, alternative methods that reactively avoid obstacles in real time using candidate paths or an artificial potential field have been studied. However, these methods require heuristics to identify specific parameters for handling various environments and are vulnerable to inaccurate input data. To address these limitations, this study proposes a method in which a vehicle drives toward the drivable area using vision and deep learning. The proposed imitation learning method learns the look-ahead point that the vehicle should reach on a vision-based occupancy grid map, obtaining a safe policy with a clear state-action relationship. Furthermore, using this point, a data aggregation (DAgger) algorithm with a weighted loss function is proposed, which imitates expert behavior more accurately, especially in unsafe or near-collision situations. Experimental results in actual semi-structured environments demonstrated the limitations of general model-based methods and the effectiveness of the proposed imitation learning method. Moreover, simulation experiments showed that DAgger with the weighted loss obtains a safer policy than existing DAgger algorithms.
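The weighted loss described in the abstract can be sketched as follows. This is a minimal illustration only: the risk-based weighting scheme, the function names, and the choice of mean squared error over look-ahead points are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def dagger_weights(risk, alpha=2.0):
    """Hypothetical per-sample weights that upweight unsafe states.

    `risk` is assumed to lie in [0, 1] (e.g. derived from proximity to
    occupied cells on the occupancy grid map); `alpha` controls how much
    near-collision samples dominate the loss. Both are illustrative.
    """
    return 1.0 + alpha * np.asarray(risk, dtype=float)

def weighted_imitation_loss(pred, expert, weights):
    """Weighted MSE between predicted and expert look-ahead points.

    `pred` and `expert` are (N, 2) arrays of look-ahead points; samples
    with larger weights (riskier states) contribute more to the loss.
    """
    pred = np.asarray(pred, dtype=float)
    expert = np.asarray(expert, dtype=float)
    err = np.sum((pred - expert) ** 2, axis=1)  # per-sample squared error
    return float(np.sum(weights * err) / np.sum(weights))
```

In a DAgger-style loop, states visited by the learned policy would be relabeled by the expert and aggregated into the dataset, with this weighted loss used at each retraining step so that errors in unsafe states are penalized more heavily.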
Details
1 Seoul National University, Dynamic Robotic Systems (DYROS) Lab., Graduate School of Convergence Science and Technology, Seoul, Republic of Korea (GRID:grid.31501.36) (ISNI:0000 0004 0470 5905)
2 Seoul National University, Dynamic Robotic Systems (DYROS) Lab., Graduate School of Convergence Science and Technology, Seoul, Republic of Korea (GRID:grid.31501.36) (ISNI:0000 0004 0470 5905); Seoul National University, ASRI, RICS, Seoul, Republic of Korea (GRID:grid.31501.36) (ISNI:0000 0004 0470 5905); Advanced Institutes of Convergence Technology, Suwon, Republic of Korea (GRID:grid.410897.3) (ISNI:0000 0004 6405 8965)