tl;dr: design of a protocol for the secure, distributed aggregation of federated learning model updates.
Paper: ACISP 2021, ePrint or personal pdf.
Code: Original repo, GitHub repo or project page
Carlo Brunetta, Georgia Tsaloli, Bei Liang, Gustavo Banegas, Aikaterini Mitrokotsa
We propose a novel primitive called NIVA that allows the distributed aggregation of multiple users’ secret inputs by multiple untrusted servers. The returned aggregation result can be publicly verified in a non-interactive way, i.e. the users are not required to participate in the aggregation beyond providing their secret inputs. NIVA allows the secure computation of the sum of a large number of users’ inputs and can be employed, for example, in the federated learning setting to aggregate the model updates of a deep neural network. We implement NIVA, evaluate its communication and execution performance, and compare it with the current state-of-the-art, i.e. the protocol of Segal et al. (CCS 2017) and the VerifyNet protocol of Xu et al. (IEEE TIFS 2020), achieving lower per-user communication and execution time.
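The aggregation core can be illustrated with a minimal additive secret-sharing sketch: each user splits their input into random shares that sum to the input modulo a prime, each server locally sums the shares it receives, and combining the servers’ partial sums recovers the total without any server learning an individual input. This is only a toy illustration of the secret-shared summation idea, not NIVA’s actual construction; the names (`share`, `per_server`, the modulus `P`) are hypothetical, and NIVA’s public verifiability of the result is omitted here.

```python
import random

P = 2**61 - 1  # illustrative prime modulus, not from the paper

def share(secret, n_servers):
    """Split a secret into n additive shares that sum to secret mod P."""
    shares = [random.randrange(P) for _ in range(n_servers - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Each user splits their input and sends one share to each server.
user_inputs = [5, 17, 42]
n_servers = 3
per_server = [[] for _ in range(n_servers)]
for x in user_inputs:
    for j, s in enumerate(share(x, n_servers)):
        per_server[j].append(s)

# Each server locally sums its received shares; no interaction needed.
partials = [sum(col) % P for col in per_server]

# Anyone combining the servers' partial sums recovers the aggregate.
total = sum(partials) % P
print(total)  # 64
```

No single server sees more than uniformly random shares, yet the combined partial sums equal the users’ total; NIVA additionally lets anyone verify that the published total is correct.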