Research over the last decade shows that machine learning (ML) models are vulnerable to adversarial manipulation. In particular, input perturbations that are incomprehensible to humans can force models to behave unexpectedly. However, existing research analyses these models in isolation, neglecting the broader system context of real-world deployments, where an ML model is merely one component within a larger application. In two parts, this thesis investigates the security implications of this system-level perspective, exploring both the challenges and the opportunities presented by the interplay between ML models and their surrounding environment. In the first half, we explore how to evaluate the security of ML systems. We highlight how existing methods fail in this setting and provide new frameworks that account for the components surrounding the ML model, focusing on techniques that can be integrated into existing evaluation methods to make them system-context aware. In the second half, we design robust ML systems in which the non-ML components compensate for the vulnerabilities of the ML model, for example by leveraging the surrounding software infrastructure and interaction protocols. Overall, this thesis takes a step towards a more systems-oriented approach to ML security.