Abstract

The rapid advancement of Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), has produced high-performance models that are widely deployed in applications ranging from image recognition and chatbots to autonomous driving and smart grid systems. However, ML models are vulnerable to adversarial attacks and data poisoning, posing security risks such as system malfunctions and decision errors. Meanwhile, data privacy concerns emerge, especially when personal data are used in model training, which can lead to data breaches. This paper surveys the Adversarial Machine Learning (AML) landscape in modern AI systems, focusing on the dual aspects of robustness and privacy. First, we explore adversarial attacks and defenses through comprehensive taxonomies. Next, we investigate robustness benchmarks alongside open-source AML technologies and software tools that ML system stakeholders can use to develop robust AI systems. Finally, we examine the AML landscape in four industry fields – automotive, digital healthcare, electrical power and energy systems (EPES), and Large Language Model (LLM)-based Natural Language Processing (NLP) systems – analyzing attacks, defenses, and evaluation concepts, thereby offering a holistic view of the modern AI-reliant industry and promoting enhanced ML robustness and privacy preservation.

Full text

© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.