
Abstract

Learning-driven systems are increasingly being deployed in critical applications. However, their inherent unpredictability often results in unforeseen behavior, raising concerns about reliability. Formal verification techniques can address these issues by providing assurances about system behavior. This dissertation introduces the "verification in the learning loop" framework, a novel approach designed to improve the formally certified performance of learning-driven systems at training convergence. The framework tackles real-world challenges by combining formal verification with learning-based methods, enabling analysis of systems with neural network components, and adapting core algorithms to practical settings, such as networked systems.

First, we consider the problem of learning a safe neurosymbolic program, assuming the environment is known. We introduce Differentiable Symbolic Execution (DSE), an efficient framework that integrates verification signals into the learning process. DSE leverages symbolic execution to construct safety losses and employs a generalized REINFORCE estimator to backpropagate gradients through non-differentiable program operations. By coupling learning algorithms with differentiable symbolic analysis, DSE produces significantly safer neurosymbolic programs than existing approaches.
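
As a rough illustration of this recipe, the sketch below is a toy example rather than the actual DSE implementation: a small network gates a non-differentiable program branch, a per-path safety penalty standing in for one derived by symbolic execution enters the loss, and its gradient reaches the branch decision through a REINFORCE-style score-function surrogate. The architecture, the toy "unsafe region", and all identifiers are illustrative assumptions.

```python
# Toy sketch, not the actual DSE implementation. A small network decides
# whether a program takes a "fast" branch; the branch is sampled, hence
# non-differentiable. A per-path safety penalty (standing in for one derived
# by symbolic execution) is backpropagated with a REINFORCE-style
# score-function surrogate.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

def branch_prob(x):
    # Probability that the program takes the "fast" branch at input x.
    return torch.sigmoid(policy(x)).squeeze(-1)

def path_safety_penalty(x, took_fast):
    # Stand-in for a symbolic-execution-derived per-path safety loss:
    # penalize taking the fast branch inside a toy unsafe region ||x|| < 0.5.
    unsafe = (x.norm(dim=-1) < 0.5).float()
    return took_fast * unsafe

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(100):
    x = torch.randn(64, 2)
    p = branch_prob(x)
    took_fast = torch.bernoulli(p.detach())      # non-differentiable sample
    penalty = path_safety_penalty(x, took_fast)  # carries no gradient itself
    log_prob = took_fast * torch.log(p + 1e-8) + (1 - took_fast) * torch.log(1 - p + 1e-8)
    task_loss = ((p - 1.0) ** 2).mean()          # toy task objective: prefer the fast branch
    safety_loss = (log_prob * penalty).mean()    # REINFORCE surrogate for the safety term
    loss = task_loss + 10.0 * safety_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the surrogate is that the safety penalty itself never needs to be differentiable; only the log-probability of the sampled path does.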

Next, we study safety guarantees for control tasks in reinforcement learning, extending our framework to unknown environments. While reasoning over the environment enables richer properties, the fact that it is unknown introduces additional challenges. To address this, we propose CAROL, a model-based reinforcement learning framework that uses a learned model of the environment together with an abstract interpreter to construct a differentiable robustness signal. This signal enables the training of policies with provable adversarial robustness. Our experiments show that CAROL outperforms traditional reinforcement learning algorithms that do not incorporate verification into the learning loop.
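
To make the idea of a differentiable robustness signal concrete, the sketch below propagates an interval abstraction of a perturbed state through a toy policy and learned dynamics model and uses the width of the reachable next-state box as a loss. Interval arithmetic stands in for the abstract interpreter, and the module shapes, perturbation radius, and loss weighting are assumptions rather than CAROL's actual construction.

```python
# Illustrative sketch, not CAROL's abstract interpreter. Interval bounds on a
# perturbed state are propagated through a toy policy and learned dynamics
# model; the width of the reachable next-state box is a differentiable
# robustness signal.
import torch
import torch.nn as nn

policy = nn.Linear(4, 2)   # toy policy head: state -> action
model = nn.Linear(6, 4)    # toy learned dynamics: (state, action) -> next state

def interval_linear(layer, lo, hi):
    # Sound interval propagation through an affine layer:
    # output center is layer(mid), output radius is radius @ |W|^T.
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    mid_out = layer(mid)
    rad_out = rad @ layer.weight.abs().t()
    return mid_out - rad_out, mid_out + rad_out

def reachable_interval(s_lo, s_hi):
    a_lo, a_hi = interval_linear(policy, s_lo, s_hi)
    x_lo = torch.cat([s_lo, a_lo], dim=-1)
    x_hi = torch.cat([s_hi, a_hi], dim=-1)
    return interval_linear(model, x_lo, x_hi)

s = torch.zeros(1, 4)                       # nominal state
eps = 0.1                                   # adversarial perturbation radius
next_lo, next_hi = reachable_interval(s - eps, s + eps)
robust_loss = (next_hi - next_lo).mean()    # differentiable signal to minimize
robust_loss.backward()                      # gradients reach policy and model
```

Because every step of the propagation is built from ordinary tensor operations, such a bound can be minimized alongside the usual reinforcement learning objective; networks with nonlinearities would additionally need an interval transformer for each activation.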

Finally, we address the practical challenges of deploying learning-driven systems in real-world settings, where reliability is critical. As a case study, we focus on learning-based congestion control and introduce C3, a framework that optimizes network performance while providing formal performance guarantees. C3 integrates learning with formal verification through a novel property-driven training loop, enabling the controller to adapt to diverse network conditions without sacrificing worst-case reliability. This work demonstrates the feasibility of bridging learning and verification at scale and highlights how "verification in the learning loop" techniques can lead to robust, deployable systems in safety-critical domains.
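
The sketch below gives a miniature, hypothetical version of such a property-driven training loop: average-case performance on sampled network conditions is optimized jointly with a penalty on counterexamples to a worst-case property. The controller, the performance proxy, and the find_counterexamples stub (which a real system would replace with a formal verifier or falsifier) are all placeholders, not components of C3.

```python
# Miniature, hypothetical property-driven training loop; not the C3 system.
# Each iteration optimizes average-case performance on sampled network
# conditions while penalizing counterexamples to a worst-case property
# ("utilization stays above 0.3"). find_counterexamples is a stub that a real
# system would replace with a formal verifier or falsifier.
import torch
import torch.nn as nn

controller = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))

def sample_traces(n):
    # Placeholder: random (latency, loss rate, bandwidth) network conditions.
    return torch.rand(n, 3)

def rollout(cond):
    # Placeholder performance proxy standing in for achieved utilization.
    return torch.sigmoid(controller(cond)).squeeze(-1)

def find_counterexamples(k):
    # Stub verifier: return the k sampled conditions closest to violating
    # the utilization floor.
    cand = sample_traces(256)
    util = rollout(cand).detach()
    return cand[util.argsort()[:k]]

opt = torch.optim.Adam(controller.parameters(), lr=1e-3)
for step in range(200):
    perf_loss = -rollout(sample_traces(64)).mean()        # average case
    cex = find_counterexamples(8)
    violation = (0.3 - rollout(cex)).clamp(min=0).mean()  # worst-case penalty
    loss = perf_loss + 5.0 * violation
    opt.zero_grad()
    loss.backward()
    opt.step()
```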

Through these innovations, which integrate verification into the learning loop while addressing end-to-end system requirements, this dissertation advances the field of learning-directed systems by providing techniques that offer formal guarantees without compromising performance.

Details

Title: Learning-Directed Systems with Safety and Robustness Certificates
Number of pages: 177
Publication year: 2025
Degree date: 2025
School code: 0227
Source: DAI-B 87/6(E), Dissertation Abstracts International
ISBN: 9798270229504
Committee member: Akella, Aditya; Dillig, Isil; Solar-Lezama, Armando
University/institution: The University of Texas at Austin
Department: Computer Science
University location: United States -- Texas
Degree: Ph.D.
Source type: Dissertation or Thesis
Language: English
Document type: Dissertation/Thesis
Dissertation/thesis number: 32458869
ProQuest document ID: 3284362928
Document URL: https://www.proquest.com/dissertations-theses/learning-directed-systems-with-safety-robustness/docview/3284362928/se-2?accountid=208611
Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database: ProQuest One Academic