Low-Cost Privacy-Preserving Decentralized Learning

Authors: Sayan Biswas (EPFL), Davide Frey (Univ Rennes, Inria, CNRS, IRISA), Romaric Gaudel (Univ Rennes, Inria, CNRS, IRISA), Anne-Marie Kermarrec (EPFL), Dimitri Lerévérend (Univ Rennes, Inria, CNRS, IRISA), Rafael Pires (EPFL), Rishi Sharma (EPFL), François Taïani (Univ Rennes, Inria, CNRS, IRISA)

Volume: 2025
Issue: 3
Pages: 451–474
DOI: https://doi.org/10.56553/popets-2025-0108


Abstract: Decentralized learning (DL) is an emerging paradigm of collaborative machine learning that enables nodes in a network to train models collectively without sharing their raw data or relying on a central server. This paper introduces Zip-DL, a privacy-aware DL algorithm that leverages correlated noise to achieve robust privacy against local adversaries while ensuring efficient convergence at low communication cost. By progressively neutralizing the noise added during distributed averaging, Zip-DL combines strong privacy guarantees with high model accuracy. Its design requires only one communication round per gradient-descent iteration, significantly reducing communication overhead compared to competitors. We establish theoretical bounds on both convergence speed and privacy guarantees. Moreover, extensive experiments demonstrate Zip-DL's practical applicability: it outperforms state-of-the-art methods in the accuracy-vs-vulnerability trade-off. Specifically, Zip-DL (i) reduces membership-inference attack success rates by up to 35% compared to baseline DL, (ii) decreases attack efficacy by up to 13% compared to competitors offering similar utility, and (iii) achieves up to 59% higher accuracy while completely nullifying a basic attack scenario, compared to a state-of-the-art privacy-preserving approach under the same threat model. These results position Zip-DL as a practical and efficient solution for privacy-preserving decentralized learning in real-world applications.
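The core mechanism described in the abstract, correlated noise that progressively cancels during distributed averaging, can be illustrated with a toy sketch. This is not the Zip-DL protocol itself (which operates on model parameters over a communication graph with formal privacy guarantees); it is a minimal, self-contained illustration of the underlying principle: each pair of nodes shares a noise term that one adds and the other subtracts, so individual contributions are masked while the network-wide average is preserved. All variable names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Local models of 4 fully connected nodes (toy scalars for illustration).
models = np.array([1.0, 2.0, 3.0, 4.0])
n = len(models)

# Each pair (i, j) draws a shared noise term; node i adds it and node j
# subtracts it, so the noise sums to zero across the whole network.
noise = np.zeros(n)
for i in range(n):
    for j in range(i + 1, n):
        z = rng.normal(scale=5.0)
        noise[i] += z
        noise[j] -= z

noisy = models + noise          # what each node actually shares
assert abs(noise.sum()) < 1e-9  # correlated noise cancels in the sum

# Averaging the noisy models recovers the true average exactly,
# even though each individual noisy value hides its node's model.
print(noisy.mean(), models.mean())
```

Each shared value `noisy[i]` is heavily perturbed (standard deviation far larger than the signal), yet the average is exact, which is why this style of noise injection can protect individual updates without degrading the aggregate used for training.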

Keywords: decentralized learning, differential privacy, correlated noise

Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.