Inria Montpellier, St-Priest Campus, Building 5, Room 02/124
Machine Learning in Montpellier, Theory and Practice – Emmy Fang & Arielle Zhang
Machine Learning (ML) models typically benefit from access to large and diverse datasets held by different parties. However, privacy concerns often prohibit direct data sharing or centralized data aggregation. To address this, various collaborative training frameworks have been proposed that enable joint model training while adhering to differential privacy (DP), the current gold-standard privacy definition. In this talk, I will provide an overview of existing privacy-preserving collaborative ML frameworks, highlighting their core techniques and limitations. To address these limitations, we propose a noise sampling mechanism based on chained table lookups, which can be implemented inside a secure multi-party computation (MPC) protocol. The method is highly flexible and compatible with a wide range of DP mechanisms. Finally, I will demonstrate an application of the proposed secure noise sampling method to DP collaborative ML training. Empirical results show that our method yields improved efficiency and model performance compared to distributed DP approaches that must account for colluding clients.
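
For intuition only, below is a minimal plain-Python sketch of the general idea of noise sampling via table lookups (drawing a uniform value and walking a cumulative table, a pattern that maps onto oblivious comparisons in MPC). The function names and the toy distribution are illustrative assumptions, not the protocol presented in the talk, and no MPC machinery is shown.

```python
# Illustrative sketch (not the speakers' actual protocol): sampling discrete
# DP-style noise via a precomputed cumulative lookup table. In an MPC setting,
# the uniform draw and the comparisons would be performed obliviously on
# secret shares; here everything runs in the clear for clarity.
import random

def build_cdf_table(probs):
    """Turn a list of (value, probability) pairs into a cumulative table."""
    table, cum = [], 0.0
    for value, p in probs:
        cum += p
        table.append((cum, value))
    return table

def sample_from_table(table):
    """Draw a uniform number and scan the cumulative table: a chain of
    lookups/comparisons rather than arithmetic on continuous noise."""
    u = random.random()
    for threshold, value in table:
        if u <= threshold:
            return value
    return table[-1][1]  # guard against floating-point round-off

# Toy example: a truncated, symmetric discrete noise distribution.
support = [(-2, 0.05), (-1, 0.2), (0, 0.5), (1, 0.2), (2, 0.05)]
noise = sample_from_table(build_cdf_table(support))
print("sampled noise:", noise)
```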

