Improving resilience in federated learning against data poisoning and network disruption at the edge
Format
Thesis
Abstract
[EMBARGOED UNTIL 12/01/2026] The increasing deployment of drone swarms at the network edge benefits from Federated Learning (FL), which enables decentralized data processing for visual situational awareness while preserving data privacy. However, FL-based drone swarm systems are vulnerable to data poisoning (e.g., label flipping, feature noise) and network disruption (e.g., distributed denial of service, location spoofing) attacks, so robust threat modeling and mitigation mechanisms are needed to ensure the operational efficiency and security of such systems. In this paper, we present FL-Defend, a novel framework that combines attack characterization with defense schemes to improve the resilience of FL-based drone swarm edge systems against data poisoning and network disruption attacks. Specifically, we characterize the impact of data poisoning attacks and propose defense strategies such as differential privacy and adversarial training. In addition, we characterize network disruption attacks and incorporate defense strategies such as rate limiting and anycasting. We validate the proposed schemes on the AERPAW testbed and show that adversarial training achieves up to 91.1% accuracy under data poisoning, at the cost of CPU usage increases of up to 233% in active drone scenarios. For network disruptions, our defenses maintain > 95% model accuracy, reduce training delays from 426 s to 180 s, limit packet loss to < 12%, detect anomalies with 45 ms latency, and restore up to 89% of throughput using rate limiting and achieve 71% recovery via anycast, at the cost of a 4.2 Gbps bandwidth overhead. These findings highlight critical trade-offs between performance and security.
Degree
M.S.
