DEVA: Decentralized, Verifiable Secure Aggregation for Privacy-Preserving Learning

tl;dr: a protocol for securely aggregating federated learning model updates across a distributed set of servers, with public verifiability of the result.

Paper: published at ISC 2021; a personal PDF is also available.

Authors

Georgia Tsaloli, Bei Liang, Carlo Brunetta, Gustavo Banegas and Aikaterini Mitrokotsa

Abstract

Aggregating data from multiple sources is often required in many applications. In this paper, we introduce DEVA, a protocol that allows a distributed set of servers to perform secure and verifiable aggregation of multiple users’ secret data, while no communication between the users occurs. DEVA computes the sum of the users’ inputs and provides public verifiability, i.e., anyone can be convinced of the correctness of the aggregated sum computed from a threshold number of servers. A direct application of the DEVA protocol is in the machine learning setting, where the aggregation of multiple users’ parameters (used in the learning model) can be orchestrated by multiple servers, in contrast to centralized solutions that rely on a single server. We prove the security and verifiability of the proposed protocol and evaluate its performance in terms of execution time and bandwidth, verification time, communication cost, and total bandwidth usage. We compare our findings to prior work, concluding that DEVA requires less communication cost for a large number of users.
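The core mechanism described in the abstract (users secret-share their inputs to several servers, with no user-to-user communication, and any threshold subset of servers suffices to recover only the sum) can be sketched with plain Shamir secret sharing. This is a simplified illustration under assumed parameters, not the actual DEVA construction, and it omits DEVA's public-verifiability component:

```python
import random

P = 2**61 - 1  # illustrative prime modulus, not taken from the paper

def share(secret, t, n):
    """Shamir-share `secret` with threshold t among n servers (x = 1..n)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)}

def reconstruct(points):
    """Lagrange interpolation at x = 0 from a dict {x: y} of share points."""
    total = 0
    for xj, yj in points.items():
        num, den = 1, 1
        for xm in points:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

# Three users with secret inputs, five servers, threshold t = 2
# (any t + 1 = 3 servers suffice to reconstruct the sum).
inputs = [10, 20, 12]
n, t = 5, 2

# Each user sends one share per server; users never talk to each other.
per_server = {x: 0 for x in range(1, n + 1)}
for secret in inputs:
    for x, y in share(secret, t, n).items():
        per_server[x] = (per_server[x] + y) % P

# Any t + 1 servers publish their aggregated shares; anyone combines them.
subset = {x: per_server[x] for x in [1, 3, 5]}
print(reconstruct(subset))  # → 42, the sum of the users' inputs
```

Because Shamir sharing is linear, the sum of the per-user shares held by each server is itself a share of the sum of the secrets, so the servers learn nothing about individual inputs while still enabling threshold reconstruction of the aggregate.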

Multi-Party Computation in the Real World