P2P Based Personalized Federated Learning for Collaborative Model Sharing and Inferencing
Abstract
Existing Federated Learning (FL) relies primarily on a client-server architecture to train a single global model from multiple local datasets. Although there are ongoing initiatives to improve the current state of federated learning, this approach may not suit clients with diverse requirements. To achieve better outcomes when constructing personalized federated models, we explore a distributed design such as Peer-to-Peer (P2P) in place of the centralized client-server architecture of federated learning.
In this thesis, we describe a P2P training and inference method designed to improve the personalization and classification capabilities of network peers and clients. In addition, we leverage parallel processing to expedite model training and evaluation by distributing IID/non-IID data across separate peers and clients; we then perform evaluations and aggregations to obtain an improved outcome. Each client federates with other relevant clients and peers to build a more robust model based on client-specific goals. This P2P FL framework allows clients to extract a model based on their knowledge of their own data. Our system assesses the performance of each client, its peer group, and the whole FL model. Experiments on the MNIST and CIFAR-10 datasets demonstrate that this P2P strategy produces more accurate models than random client communication.
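The peer-wise federation described above could be sketched roughly as follows. This is a minimal illustration, not the thesis implementation: the `Peer` class, the flat parameter lists, and the FedAvg-style weighted averaging are all assumptions made for clarity; the actual framework's model representation, neighbor-selection criteria, and aggregation rule may differ.

```python
# Hypothetical sketch: each peer aggregates models only from its
# chosen "relevant" neighbors, rather than with a central server.

def aggregate(weights_list, sizes):
    """Dataset-size-weighted average of parameter lists (FedAvg-style)."""
    total = sum(sizes)
    n_params = len(weights_list[0])
    return [
        sum(w[i] * s for w, s in zip(weights_list, sizes)) / total
        for i in range(n_params)
    ]

class Peer:
    def __init__(self, pid, weights, n_samples):
        self.pid = pid
        self.weights = weights      # flat list of model parameters
        self.n_samples = n_samples  # size of this peer's local dataset
        self.neighbors = []         # relevant peers, chosen per client goals

    def federate(self):
        # Pull models from relevant neighbors and aggregate with our own.
        group = [self] + self.neighbors
        self.weights = aggregate(
            [p.weights for p in group],
            [p.n_samples for p in group],
        )
```

For example, a peer holding 10 samples federating with a neighbor holding 30 samples would weight the neighbor's parameters three times as heavily as its own.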
Table of Contents
Introduction -- Related work -- Proposed framework -- Results and evaluation -- Conclusion and future work
Degree
M.S. (Master of Science)