Multi-Master Model: Better Way to Calculate the Final Weights

Authors

  • Bowen Ma

DOI:

https://doi.org/10.62051/k6me5s10

Keywords:

Multi-Master model; weights; federated learning.

Abstract

In the current age, in which data privacy and security are becoming ever more important, the shortcomings of traditional distributed machine learning algorithms have become more apparent, and Federated Learning (FL) has emerged in response. Even so, a better algorithmic framework is needed to handle more complex training scenarios. The focus of this paper is multi-master Federated Learning, and specifically its final step: how to combine the weights produced by the different master nodes. In this algorithm, instead of a single central server receiving all the weights computed by the clients, we use multiple master nodes, each responsible for a portion of the data; clients under different master nodes do not interact. Finally, the master nodes combine their weights to obtain a final weight. In this paper the number of master nodes is 3, and we propose three approaches: 1) take a simple average of the weights; 2) take a weighted average; 3) apply the traditional FedAvg algorithm to these weights. We found that on simple datasets the three approaches for obtaining the final weights in multi-master FL perform similarly and are no worse than traditional FedAvg, while on complex real-world data, approach 3 gives a slightly better result.
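To make the three aggregation rules concrete, the sketch below shows one plausible way to implement them in Python with NumPy. This is a minimal illustration, not the authors' implementation: the function names, the per-master sample counts, and the coefficients passed to the weighted average are all illustrative assumptions; the FedAvg-style rule is the standard data-size weighting w = Σ_k (n_k / n) · w_k.

    # Minimal sketch (assumed names, not the paper's code) of the three
    # ways to combine the weight vectors produced by the 3 master nodes.
    import numpy as np

    def simple_average(master_weights):
        # Approach 1: plain unweighted mean of the masters' weights.
        return np.mean(master_weights, axis=0)

    def weighted_average(master_weights, coefficients):
        # Approach 2: weighted mean; the abstract does not fix the
        # coefficients, so here they are supplied by the caller.
        c = np.asarray(coefficients, dtype=float)
        return np.average(master_weights, axis=0, weights=c)

    def fedavg_on_masters(master_weights, sample_counts):
        # Approach 3: treat each master node as a FedAvg client and
        # weight its vector by its share n_k / n of the total data.
        n = np.asarray(sample_counts, dtype=float)
        return np.average(master_weights, axis=0, weights=n / n.sum())

    # Usage with 3 master nodes, each holding a 4-parameter weight vector.
    masters = np.random.rand(3, 4)
    print(simple_average(masters))
    print(weighted_average(masters, [0.5, 0.3, 0.2]))
    print(fedavg_on_masters(masters, [1000, 2500, 1500]))

Under these assumptions, approach 3 is simply approach 2 with the coefficients fixed to each master's share of the total data, which is how FedAvg weights its clients.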

References

H. B. McMahan, E. Moore, D. Ramage, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data. arXiv. https://arxiv.org/abs/1602.05629, 2023.

S. Bharati, M. R. H. Mondal, P. Podder. Federated Learning: Applications, Challenges and Future Directions. arXiv. https://arxiv.org/abs/2205.09513, 2022.

Q. Li, Y. Diao, Q. Chen. Federated Learning on Non-IID Data Silos: An Experimental Study. arXiv. https://arxiv.org/abs/2102.02079, 2021.

T. Li, A. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar. Federated Optimization in Heterogeneous Networks. arXiv. https://arxiv.org/abs/1812.06127, 2020.

S. Karimireddy, S. Kale, M. Mohri, S. Reddi, S. Stich. SCAFFOLD: Stochastic Controlled Averaging for Federated Learning. arXiv. https://arxiv.org/abs/1910.06378, 2021.

N. Bhuyan, S. Moharir. Multi-Model Federated Learning. arXiv. https://arxiv.org/abs/2201.02582, 2022.

T. Sun, D. Li, B. Wang. Decentralized Federated Averaging. arXiv. https://arxiv.org/abs/2104.11375, 2021.

Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, V. Chandra. Federated Learning with Non-IID Data. arXiv. https://arxiv.org/abs/1806.00582, 2022.

J. Hasan. Security and Privacy Issues of Federated Learning. arXiv. https://arxiv.org/abs/2307.12181, 2023.

Z. Qu, X. Li, J. Xu, B. Tang, Z. Lu, Y. Liu. On the Convergence of Multi-Server Federated Learning with Overlapping Area. arXiv. https://arxiv.org/abs/2208.07893, 2022.

Published

12-08-2024

How to Cite

Ma, B. (2024) “Multi-Master Model: Better Way to Calculate the Final Weights”, Transactions on Computer Science and Intelligent Systems Research, 5, pp. 947–952. doi:10.62051/k6me5s10.