Deep neural networks (DNNs) have shown strong performance in synthetic aperture radar (SAR) image classification. However, their “black-box” nature limits interpretability and poses challenges for robustness, which is critical for sensitive applications such as disaster assessment, environmental monitoring, and agricultural insurance. This study systematically evaluates the adversarial robustness of five representative DNNs (VGG11/16, ResNet18/101, and A-ConvNet) under a variety of attack and defense settings. Using eXplainable AI (XAI) techniques and attribution-based visualizations, we analyze how adversarial perturbations and adversarial training affect model behavior and decision logic. Our results reveal significant robustness differences across architectures, highlight interpretability limitations, and suggest practical guidelines for building more robust SAR classification systems. We also discuss challenges associated with large-scale, multi-class land use and land cover (LULC) classification under adversarial conditions.
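The abstract refers to adversarial perturbations of SAR image classifiers. As a minimal, hedged sketch of one standard gradient-based attack (FGSM, the fast gradient sign method; the paper does not specify its attack suite here, and this toy logistic model and all names below are illustrative, not the authors' implementation):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM on a toy logistic model p = sigmoid(w.x + b).

    Perturbs the input x in the direction that increases the
    binary cross-entropy loss, clipped to an eps-sized step.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the BCE loss w.r.t. the input: dL/dx = (p - y) * w
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # toy "model" weights (illustrative)
b = 0.1
x = rng.normal(size=8)   # stand-in for an input feature vector
y = 1.0                  # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.05)

# Confidence in the true class should drop on the adversarial input.
prob = lambda v: 1.0 / (1.0 + np.exp(-(w @ v + b)))
print(prob(x), prob(x_adv))
```

Adversarial training, which the abstract also evaluates, simply folds such perturbed examples back into the training loss.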
Keywords
Accuracy; Classification systems; Datasets; Deep learning; Land use; Network reliability; Artificial neural networks; Synthetic aperture radar; Sensors; Neural networks; Classification; Image classification; Methods; Radar imaging; Land cover; Explainable artificial intelligence; Robustness
Zhang Limeng 1
Guo Weiwei 2
Zhang Zenghui 1
Datcu Mihai 3
1 Shanghai Key Laboratory of Intelligent Sensing and Recognition, Shanghai Jiao Tong University, Shanghai 200240, China; [email protected] (T.C.); [email protected] (L.Z.)
2 Center of Digital Innovation, Tongji University, Shanghai 200092, China; [email protected]
3 Research Center for Spatial Information (CEOSpaceTech), POLITEHNICA Bucharest, Bucharest 011061, Romania; [email protected]