Content area

Abstract

Asimov's Three Laws of Robotics famously outlined fundamental safety principles for human-robot interaction. Safety remains just as paramount for today's autonomous systems, such as robots, whose cyber-physical nature couples computation directly with the physical world. As autonomous systems are deployed ever more widely in real-world environments, formal safety verification has become both more important and more difficult: end-to-end verification of these complex, integrated systems remains an open challenge due to their high dimensionality, nonlinearity, and learning-based components. This thesis pursues verifiably safe autonomy from two complementary directions: (i) safe control of learning-enabled systems with formal guarantees, and (ii) resilient safe control that maintains those guarantees under extreme scenarios such as sensor faults and cyber-physical attacks.

The first half of this dissertation presents formal verification of autonomous systems that integrate learning-enabled components. It begins with safety verification of neural control barrier functions (NCBFs) with Rectified Linear Unit (ReLU) activation functions. By leveraging a generalization of Nagumo's theorem, we derive exact safety conditions for deterministic systems. To manage computational complexity, we accelerate verification and synthesis with a search algorithm based on Verification of Neural Networks (VNN) tools and a neural breadth-first search algorithm. We then extend the synthesis and verification of safe control to stochastic systems.
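To make the underlying safety condition concrete, the sketch below illustrates the standard control barrier function constraint on a toy one-dimensional single-integrator. The system, the barrier function h(x) = 1 - x², and the linear class-K function are all hypothetical choices for illustration; the dissertation itself treats neural CBFs with ReLU activations and exact verification, which this sketch does not attempt.

```python
# Toy 1D single-integrator x' = u with safe set {x : h(x) >= 0}, h(x) = 1 - x**2.
# Illustrative only; not the NCBF verification method of the thesis.

def h(x):
    return 1.0 - x ** 2

def grad_h(x):
    return -2.0 * x

def alpha(s):
    # Class-K function; a simple linear gain here (assumed for illustration).
    return 1.0 * s

def cbf_condition_holds(x, u):
    # Standard CBF condition: dh/dt = grad_h(x) * u >= -alpha(h(x)).
    return grad_h(x) * u >= -alpha(h(x))

def safe_filter(x, u_nominal):
    # Minimal 1D "safety filter": if the nominal input violates the CBF
    # condition, project it onto the boundary of the admissible half-space.
    g = grad_h(x)
    bound = -alpha(h(x))
    if g * u_nominal >= bound:
        return u_nominal  # nominal input is already safe
    if g != 0.0:
        return bound / g  # closest input satisfying g * u >= bound
    return u_nominal      # g == 0: condition reduces to 0 >= -alpha(h(x))
```

For example, near the boundary at x = 0.9, a nominal input u = 1.0 that pushes the state outward violates the condition and is clipped, while inputs that keep dh/dt above -alpha(h(x)) pass through unchanged. NCBF verification replaces the hand-written h with a ReLU network and must certify this condition exactly over each activation region.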

The second half of this dissertation broadens end-to-end verification by explicitly accounting for imperfections and perturbations. We first propose Fault-Tolerant Stochastic CBFs and NCBFs that provide safety guarantees for autonomous systems under state estimation errors caused by low-dimensional sensor faults and attacks. We then investigate the unique challenges posed by Light Detection and Ranging (LiDAR) perception attacks, proposing a fault detection, identification, and isolation mechanism for 2D and 3D LiDAR together with safe control under attack.

Details

Title
Resilient Safe Control of Autonomous Systems
Number of pages
211
Publication year
2025
Degree date
2025
School code
0252
Source
DAI-B 87/1(E), Dissertation Abstracts International
ISBN
9798288853159
Committee member
Sinopoli, Bruno; Vorobeychik, Yevgeniy; Zhang, Ning; Kantaros, Ioannis; Zeng, Shen
University/institution
Washington University in St. Louis
Department
Electrical & Systems Engineering
University location
United States -- Missouri
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
32120769
ProQuest document ID
3231760069
Document URL
https://www.proquest.com/dissertations-theses/resilient-safe-control-autonomous-systems/docview/3231760069/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic