Noise Addition Strategies for Differential Privacy in Stochastic Gradient Descent
DOI: https://doi.org/10.62051/f2kew975

Keywords: noise addition strategies; differential privacy; stochastic gradient descent

Abstract
Differential privacy is increasingly widely applied in machine learning, especially in stochastic gradient descent (SGD), where protecting data privacy by adding noise has become an active research topic. This paper reviews noise addition strategies for differentially private SGD along several dimensions: adjustment based on the noise distribution, adjustment based on the gradient norm, adjustment based on the privacy budget, and methods based on the model architecture. Each strategy behaves differently with respect to the level of privacy protection, the loss in model performance, and computational complexity; this article compares and analyzes these differences in detail, aiming to provide a valuable reference for researchers and practitioners. The article also discusses how to combine federated learning with differential privacy to protect data privacy more efficiently in a secure multi-party computation (MPC) environment. This review highlights the wide application of differential privacy in machine learning and deep learning and its importance for privacy protection, and it outlines directions and challenges for future research.
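The core mechanism surveyed above can be illustrated with a minimal sketch of one DP-SGD update: clip each per-example gradient to a fixed norm bound, average, and add Gaussian noise calibrated to that bound. This is a simplified illustration, not taken from the paper; the function name `dp_sgd_step` and its parameters (`clip_norm`, `noise_multiplier`, `lr`) are hypothetical choices for exposition.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One DP-SGD update (illustrative sketch).

    Each per-example gradient is clipped to L2 norm clip_norm, the clipped
    gradients are averaged, and Gaussian noise proportional to the
    sensitivity clip_norm / batch_size is added before the parameter step.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise std = noise_multiplier * sensitivity.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

The `noise_multiplier` controls the privacy/utility trade-off discussed in the review: larger values give a smaller privacy budget per step but degrade model performance.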
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.







