Unmanned aerial vehicles (UAVs) play an ever-increasing role in disaster response and remote sensing, yet the deep learning models they rely on remain highly vulnerable to adversarial attacks. This paper presents an evaluation and defense framework aimed at enhancing adversarial robustness in aerial disaster image classification on the AIDERV2 dataset. Our methodology is structured into four phases: (I) baseline training on clean data with ResNet-50, (II) vulnerability assessment under Projected Gradient Descent (PGD) attacks, (III) adversarial training with PGD to improve model resilience, and (IV) comprehensive post-defense evaluation under identical attack scenarios. The baseline model achieves 93.25% accuracy on clean data, but its accuracy drops to as low as 21.00% under strong adversarial perturbations. In contrast, the adversarially trained model maintains over 75.00% accuracy across all PGD configurations, reducing the attack success rate by more than 60%. We evaluate defense performance using metrics such as Clean Accuracy, Adversarial Accuracy, Accuracy Drop, and Attack Success Rate. Our results demonstrate the practical importance of adversarial training for safety-critical UAV applications and provide a reference point for future research. This work contributes to making deep learning systems on aerial platforms more secure, robust, and reliable in mission-critical environments.
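
As a rough illustration of phases (II) and (III), the following sketch outlines a standard L-infinity PGD attack and a PGD-based adversarial training step in PyTorch. It is a minimal example under assumed settings: the perturbation budget eps, step size alpha, and iteration count are hypothetical placeholders rather than the configurations evaluated in the paper, and inputs are assumed to be scaled to [0, 1].

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
        # Craft L-infinity PGD adversarial examples inside an eps-ball around
        # the clean images (hyperparameters here are illustrative only).
        adv = images + torch.empty_like(images).uniform_(-eps, eps)
        adv = torch.clamp(adv, 0.0, 1.0).detach()
        for _ in range(steps):
            adv.requires_grad_(True)
            loss = F.cross_entropy(model(adv), labels)
            grad = torch.autograd.grad(loss, adv)[0]
            # Gradient-sign ascent step, then projection back onto the eps-ball.
            adv = adv.detach() + alpha * grad.sign()
            adv = torch.clamp(adv, images - eps, images + eps)
            adv = torch.clamp(adv, 0.0, 1.0)
        return adv.detach()

    def adversarial_training_step(model, images, labels, optimizer):
        # One step of PGD adversarial training: generate adversarial examples
        # on the fly and minimize the classification loss on them.
        model.train()
        adv = pgd_attack(model, images, labels)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

Under one common convention, Accuracy Drop is the difference between clean and adversarial accuracy, and Attack Success Rate is the fraction of attacked inputs the model misclassifies; the paper's exact definitions may differ.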
