Abstract
The rapid advancement of Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), has produced high-performance models that are widely deployed in applications ranging from image recognition and chatbots to autonomous driving and smart grid systems. However, ML models are vulnerable to adversarial attacks and data poisoning, posing security risks such as system malfunctions and decision errors. At the same time, data privacy concerns emerge, especially when personal data are used in model training, which can lead to data breaches. This paper surveys the Adversarial Machine Learning (AML) landscape in modern AI systems, focusing on the dual aspects of robustness and privacy. First, we explore adversarial attacks and defenses through comprehensive taxonomies. Next, we investigate robustness benchmarks alongside open-source AML technologies and software tools that ML system stakeholders can use to develop robust AI systems. Finally, we examine the AML landscape in four industry fields: automotive, digital healthcare, electrical power and energy systems (EPES), and Large Language Model (LLM)-based Natural Language Processing (NLP) systems, analyzing attacks, defenses, and evaluation concepts. In doing so, we offer a holistic view of the modern AI-reliant industry and promote enhanced ML robustness and privacy preservation in the future.
Keywords
Large language models; Artificial intelligence; Privacy; Taxonomy; Image processing systems; Smart grid; Deep learning; Natural language processing; Software; Robustness; Errors; Health services; Poisoning; Automobile industry; Health care; Human-computer interaction; Data; Landscape; Preservation; Language modeling
Affiliations
1 National Technical University of Athens, Decision Support Systems Laboratory, School of Electrical and Computer Engineering, Athens, Greece (GRID:grid.4241.3) (ISNI:0000 0001 2185 9808)
2 Superbo AI, Athens, Greece