Content area
Precision agriculture has emerged as a vital solution for meeting the food demands of a growing global population. However, the high upfront costs of sensors, data analytics tools, and automation often prevent small-scale farms from adopting these advanced practices. Cooperative Smart Farming (CSF) offers a practical way to address the evolving needs of modern farming by making precision agriculture more accessible and affordable for small-scale farms. These cooperatives are formal enterprises collectively financed, managed, and operated by member farms working toward shared benefits. As smart agriculture adoption grows, CSFs are poised to play an essential role in building a more sustainable, resilient, and profitable agriculture for all member farms. However, as their reliance on technology increases, CSFs also face heightened cybersecurity risks: a cyber attack on one farm can disrupt the entire network, threatening data integrity and decision-making. To address these risks, we first set up two independent smart farming testbeds incorporating sensors commonly used in smart farming, launched different cyber attacks on each testbed, and collected two network datasets. We then proposed a CNN-Transformer-based network anomaly detection model specifically designed for deployment at the edge. However, an edge-based anomaly detection model can only detect cyber attacks observed at its own edge and fails to detect new attacks occurring on other smart farms within the CSF. If member farms do not quickly share anomaly profiles with one another, they remain unaware of novel zero-day attacks and continue to exchange data. This not only compromises each farm's decision-making processes but also allows attackers to remotely control and exploit on-field sensors and devices, creating unsafe and unproductive farming environments.
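To make the detector's structure concrete, the sketch below shows one way such a CNN-Transformer classifier could be assembled in PyTorch for tabular network-flow features. The layer sizes, feature count, and class count are illustrative assumptions, not the configuration used in our experiments.

```python
# Minimal sketch of a CNN-Transformer anomaly detector for network-flow
# features. All dimensions (n_features, d_model, heads, layers) are
# illustrative assumptions.
import torch
import torch.nn as nn

class CNNTransformerDetector(nn.Module):
    def __init__(self, n_features: int = 40, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2, n_classes: int = 2):
        super().__init__()
        # 1-D convolution extracts local patterns across the flow features.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
        )
        # Transformer encoder captures global dependencies among the
        # convolutional feature positions.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features) -> add a channel dimension for Conv1d.
        h = self.cnn(x.unsqueeze(1))          # (batch, d_model, n_features // 2)
        h = self.encoder(h.permute(0, 2, 1))  # (batch, seq, d_model)
        return self.head(h.mean(dim=1))       # mean-pool over positions

# Example: score a batch of 8 flows with 40 features each.
model = CNNTransformerDetector()
logits = model(torch.randn(8, 40))
print(logits.shape)  # torch.Size([8, 2])
```

The small depth and width keep the parameter count modest, which is the kind of trade-off an edge deployment would favor.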
To address this, we develop a federated learning (FL)-based anomaly detector that enables collaborative learning of network anomalies across multiple smart farms while preserving data privacy. To achieve faster model updates, a key requirement of our research problem, we incorporate transfer learning and model compression into the federated learning approach. Additionally, we investigate the impact of adversarial attacks on FL-based anomaly detection systems in CSF. Our results indicate that adversarial attacks are more effective when the attacker crafts adversarial samples by targeting the most essential features of the dataset rather than randomly selected ones. To counter these attacks, we implement a defense mechanism based on the DistilBERT language model, which filters poisoned data through cosine similarity-based masking, thereby improving the system's robustness against adversarial threats.
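As an illustration of how the federated pipeline can combine these ideas, the sketch below pairs FedAvg-style aggregation with a frozen backbone (transfer learning) and top-k delta sparsification as a simple stand-in for model compression. It assumes the detector sketched above (with a classifier attribute named `head`); the function names, compression ratio, and training loop are hypothetical, not our exact implementation.

```python
# Hedged sketch of one federated round: each farm fine-tunes only the
# classifier head of the shared detector, sparsifies its weight delta before
# upload, and the server averages the deltas.
import copy
import torch

def topk_sparsify(t: torch.Tensor, ratio: float) -> torch.Tensor:
    # Keep only the largest-magnitude entries to shrink the upload.
    flat = t.flatten()
    k = max(1, int(ratio * flat.numel()))
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(t)

def local_update(global_model, loader, epochs=1, lr=1e-3, ratio=0.1):
    model = copy.deepcopy(global_model)
    # Transfer learning: freeze the CNN/Transformer backbone, adapt the head.
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith("head")
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # Upload only the sparsified difference from the global weights.
    local_state = model.state_dict()
    delta = {k: (local_state[k] - v).detach()
             for k, v in global_model.state_dict().items()}
    return {k: topk_sparsify(v, ratio) for k, v in delta.items()}

def fed_avg(global_model, deltas):
    # Server: average the client deltas and apply them to the global model.
    state = global_model.state_dict()
    for k in state:
        state[k] = state[k] + torch.stack([d[k] for d in deltas]).mean(dim=0)
    global_model.load_state_dict(state)
    return global_model
```

Freezing the backbone and sending sparse deltas both reduce per-round communication, which is what makes faster model sharing across member farms plausible.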
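The defense can likewise be summarized as embedding serialized flow records with DistilBERT and masking those whose cosine similarity to a trusted reference falls below a threshold. In the sketch below, the record serialization format, the checkpoint name, and the 0.8 threshold are illustrative assumptions.

```python
# Hedged sketch of cosine-similarity-based masking with DistilBERT: records
# whose embeddings drift too far from a trusted centroid are filtered out
# before they reach local training.
import torch
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
bert = DistilBertModel.from_pretrained("distilbert-base-uncased").eval()

def embed(rows):
    # rows: list of flow records rendered as "feature=value ..." strings.
    enc = tokenizer(rows, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc).last_hidden_state   # (batch, seq_len, 768)
    return out.mean(dim=1)                    # mean-pool token embeddings

def mask_poisoned(candidate_rows, trusted_rows, threshold=0.8):
    centroid = embed(trusted_rows).mean(dim=0, keepdim=True)
    sims = torch.nn.functional.cosine_similarity(embed(candidate_rows), centroid)
    keep = sims >= threshold                  # mask low-similarity records
    return [r for r, k in zip(candidate_rows, keep.tolist()) if k]
```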