Physical Unclonable Functions (PUFs) play a crucial role in enhancing the security of integrated circuits by exploiting unpredictable manufacturing variations to generate unique and unclonable Challenge-Response Pairs (CRPs). They are increasingly used in hardware-based authentication systems, providing a higher level of security assurance than traditional cryptographic techniques. Among the various types of PUFs, Ring Oscillator Physical Unclonable Functions (ROPUFs) have attracted significant interest due to their efficiency in resource-constrained settings such as FPGAs. However, with the growing sophistication of machine learning (ML) models, particularly neural networks and generative techniques, concerns have emerged about the potential vulnerability of ROPUFs to predictive attacks.
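To make the ROPUF mechanism concrete, the following is a minimal simulation sketch of the common construction in which a challenge selects a pair of ring oscillators and the response bit records which one oscillates faster. It is not the FPGA implementation studied here; all names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical ROPUF model: each ring oscillator has a nominal frequency
# plus a device-specific offset caused by manufacturing variation.
N_OSC = 32
frequencies = 100e6 + rng.normal(0, 1e5, size=N_OSC)  # Hz, illustrative values

def ropuf_response(challenge: tuple[int, int]) -> int:
    """A challenge selects a pair of oscillators; the response bit
    records which one runs faster (a common ROPUF construction)."""
    i, j = challenge
    return int(frequencies[i] > frequencies[j])

# Build a toy CRP dataset: every oscillator pair and its response bit.
crps = [((i, j), ropuf_response((i, j)))
        for i in range(N_OSC) for j in range(i + 1, N_OSC)]
print(crps[:3])
```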
This research investigates the vulnerability of FPGA-based ROPUFs to attacks using generative adversarial networks (GANs) and machine learning models. It focuses on scenarios where an adversary has access to just 5% of the CRP dataset and employs a GAN to generate synthetic CRPs. This augmentation significantly increases the predictive accuracy of machine learning models, raising serious concerns about CRP security in hardware authentication.
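As an illustration of this attack pipeline, the sketch below shows one plausible way a GAN could be trained on an attacker's small CRP set and then sampled for synthetic CRPs. The architecture, layer sizes, and bit widths are assumptions for illustration only and are not claimed to match the models used in this work.

```python
import torch
import torch.nn as nn

# A CRP is modeled here as a fixed-length challenge bit vector plus one
# response bit (CH_BITS and Z_DIM are illustrative choices).
CH_BITS, Z_DIM = 16, 32

generator = nn.Sequential(
    nn.Linear(Z_DIM, 64), nn.ReLU(),
    nn.Linear(64, CH_BITS + 1), nn.Sigmoid(),  # challenge bits + response bit
)
discriminator = nn.Sequential(
    nn.Linear(CH_BITS + 1, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),  # P(sample is a real CRP)
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_crps: torch.Tensor) -> None:
    """One adversarial update on a batch of real CRPs, shape [B, CH_BITS+1]."""
    b = real_crps.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator: push real CRPs toward 1, generated CRPs toward 0.
    fake = generator(torch.randn(b, Z_DIM)).detach()
    d_loss = bce(discriminator(real_crps), ones) + bce(discriminator(fake), zeros)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to fool the discriminator into labeling fakes as real.
    fake = generator(torch.randn(b, Z_DIM))
    g_loss = bce(discriminator(fake), ones)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training on the attacker's 5% of CRPs, sample synthetic ones,
# e.g. the 20,000 augmentation CRPs used in the experiments:
with torch.no_grad():
    synthetic = (generator(torch.randn(20_000, Z_DIM)) > 0.5).int()
```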
Twelve machine learning algorithms were evaluated on 5%, 10%, 40%, and 100% of the original CRP dataset. With only 10% of the CRP data, Categorical Boosting (CB) and Extreme Gradient Boosting (XGB) reached prediction accuracies of 80% and 79%, respectively, while traditional models such as Logistic Regression (LR) and Naive Bayes (NB) reached only 55% each. At 40%, XGB and CB each achieved 87% accuracy, which improved further with the full dataset. When only 5% of CRPs were available, XGB achieved 73.83% accuracy and Decision Tree (DT) achieved 60.17%. After augmenting with 20,000 GAN-generated CRPs, all models improved: DT rose to 67.15% and XGB to 83.41%. These findings show how synthetic data can compromise ROPUF-based security and highlight the urgent need for stronger countermeasures.
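The evaluation protocol can be sketched as follows, assuming challenge bit vectors `X` and response bits `y` from the ROPUF dataset (not reproduced here). The helper `evaluate_on_fraction` and its hyperparameters are hypothetical and stand in for the per-model training runs described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier  # one of the strongest models in the study

def evaluate_on_fraction(X, y, fraction, X_synth=None, y_synth=None):
    """Train on `fraction` of the real CRPs, optionally augmented with
    GAN-generated CRPs, and report accuracy on the held-out remainder."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=fraction, random_state=0, stratify=y)
    if X_synth is not None:
        # Augment the attacker's small training set with synthetic CRPs.
        X_train = np.vstack([X_train, X_synth])
        y_train = np.concatenate([y_train, y_synth])
    model = XGBClassifier(n_estimators=200, eval_metric="logloss")
    model.fit(X_train, y_train)
    return accuracy_score(y_test, model.predict(X_test))

# Example usage: the 5% attacker scenario, with and without augmentation.
# acc_plain = evaluate_on_fraction(X, y, fraction=0.05)
# acc_aug   = evaluate_on_fraction(X, y, 0.05, X_synth=synth_X, y_synth=synth_y)
```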
The research not only highlights the risks posed by machine learning attacks but also emphasizes the importance of developing more secure PUF implementations that can resist data-driven attacks. With the growing reliance on FPGA-based systems for critical applications, such as defense and cryptographic modules, ensuring that ROPUFs remain secure against these emerging threats is essential. This work contributes to the existing body of knowledge by showing how adversaries can leverage advanced generative models to undermine hardware security features. It calls for further research into mitigation strategies that can enhance the resilience of PUFs against machine learning-based attacks, ensuring that these security mechanisms continue to serve as reliable protection in integrated circuits.