This thesis presents the design, implementation, and evaluation of a low-cost, real-world autonomous driving platform that leverages imitation learning and edge AI for end-to-end vehicle control. The system is constructed on a modified Traxxas RC car chassis and integrates an NVIDIA Jetson Orin Nano as the primary computation unit. A stereo RGB-D camera is employed to capture environmental observations, while synchronized pulse-width modulation (PWM) signals are recorded during expert teleoperation to serve as ground truth for training.
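As a rough sketch of the recording setup described above, pairing stereo frames with PWM readings during teleoperation might look like the following. The `read_frame` and `read_pwm` helpers are hypothetical placeholders for the camera and receiver drivers, which the abstract does not name, and the 20 Hz logging rate is an assumption.

```python
import csv
import random
import time

def read_frame():
    """Hypothetical camera hook: return (timestamp, path) of the latest stereo frame."""
    return time.time(), f"frames/{time.time_ns()}.png"

def read_pwm():
    """Hypothetical receiver hook: return (timestamp, steering_us, throttle_us)."""
    return time.time(), random.randint(1000, 2000), random.randint(1000, 2000)

def log_session(out_csv: str, duration_s: float = 60.0, period_s: float = 0.05) -> None:
    """Record time-aligned image/PWM pairs at a fixed rate (20 Hz by default)."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "image", "steering_us", "throttle_us"])
        t_end = time.time() + duration_s
        while time.time() < t_end:
            t_img, image_path = read_frame()
            t_pwm, steering_us, throttle_us = read_pwm()
            # Keep a pair only if the frame and PWM sample are close in time.
            if abs(t_img - t_pwm) < period_s / 2:
                writer.writerow([t_img, image_path, steering_us, throttle_us])
            time.sleep(period_s)

log_session("teleop_run.csv", duration_s=5.0)
```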
A behavior cloning framework based on a convolutional neural network (CNN) is developed to map raw stereo images to throttle and steering commands. The model is trained on time-aligned image-action pairs and deployed on the embedded platform for real-time inference. An integrated data logging and replay pipeline supports validation of control-signal fidelity and trajectory-level evaluation.
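The abstract does not disclose the network architecture, so the sketch below only illustrates the behavior-cloning contract it describes: a small PilotNet-style CNN regressing steering and throttle from stacked stereo frames, trained with a mean-squared-error loss on time-aligned image-action pairs. The layer sizes, input resolution, and six-channel stereo stacking are all assumptions.

```python
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    """Illustrative CNN mapping a stereo image pair to [steering, throttle]."""

    def __init__(self, in_channels: int = 6):  # two stacked RGB frames (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 50), nn.ReLU(),
            nn.Linear(50, 2),  # outputs: [steering, throttle]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# One behavior-cloning step: regress expert commands from images.
model = DrivingPolicy()
images = torch.randn(8, 6, 120, 160)  # batch of stacked stereo frames
expert = torch.randn(8, 2)            # PWM-derived [steering, throttle] labels
loss = nn.functional.mse_loss(model(images), expert)
loss.backward()
```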
Extensive experiments are conducted to assess system performance across three domains: offline prediction accuracy, consistency of replay signals, and real-time open-loop behavior. Failure case analysis highlights the challenges posed by dynamic lighting and distributional shift, motivating future research on data augmentation, feedback control, and multi-modal fusion.
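The abstract does not state the metrics used for the offline and replay-consistency evaluations; per-channel mean absolute error, sketched below, is one plausible stand-in that applies to both comparisons.

```python
import numpy as np

def command_mae(pred: np.ndarray, expert: np.ndarray) -> dict:
    """Per-channel mean absolute error between predicted and recorded commands.

    Both arrays have shape (N, 2) with columns [steering, throttle]; the same
    function can score offline predictions against expert logs or a replayed
    PWM trace against the originally recorded one.
    """
    err = np.abs(pred - expert).mean(axis=0)
    return {"steering_mae": float(err[0]), "throttle_mae": float(err[1])}

# Example: compare a replayed PWM trace against the recorded ground truth.
recorded = np.random.uniform(1000, 2000, size=(500, 2))
replayed = recorded + np.random.normal(0, 5, size=recorded.shape)
print(command_mae(replayed, recorded))
```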
The proposed platform offers a reproducible and extensible testbed for evaluating learning-based control algorithms in real-world settings, helping to bridge the gap between simulation and physical deployment. Its modular architecture, low cost, and thorough empirical evaluation make it well suited for autonomous driving research, education, and rapid algorithm prototyping.