KEYWORDS: Data privacy, Machine learning, Data modeling, Education and training, Stochastic processes, Reflection, Process modeling, Instrument modeling, Design, Computer security
In traditional federated learning, participants collaboratively train a model by sharing model parameters rather than raw data. However, most existing federated learning frameworks assume a uniform privacy budget across all participants and therefore cannot meet personalized privacy requirements. Moreover, during model aggregation, few frameworks account for potential malicious attacks or data-leakage risks among participants. To address these issues, this paper proposes a federated learning scheme based on personalized differential privacy and secret sharing (PDPSS-FL). The scheme provides each participant with personalized differential privacy protection by adding noise calibrated to that participant's own privacy budget to the local model. Secret sharing is applied during model updating and parameter transmission so that aggregation remains secure in the presence of honest-but-curious servers. Experimental results demonstrate that the proposed scheme produces high-quality models while satisfying personalized privacy protection requirements in a secure environment.
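To make the two ingredients of the scheme concrete, the sketch below illustrates one round in the style the abstract describes: each client clips its local update and adds Gaussian noise scaled to its own privacy budget (personalized differential privacy), then splits the noisy update into additive secret shares across two non-colluding servers, so no single honest-but-curious server ever sees a client's update in the clear. This is a minimal illustration, not the paper's actual protocol; the function names, the Gaussian mechanism, the fixed-point encoding, and the two-server setup are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
MODULUS = 2**31 - 1   # prime field for additive secret sharing (assumed)
SCALE = 2**16         # fixed-point scaling for real-valued updates (assumed)

def personalized_dp_noise(update, epsilon, delta=1e-5, clip=1.0):
    """Clip the update and add Gaussian noise calibrated to this
    client's own budget: smaller epsilon -> larger noise.
    Uses the standard Gaussian-mechanism sigma as an illustration."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / (norm + 1e-12))
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

def additive_shares(vec, n_shares):
    """Encode a real vector in fixed point and split it into n additive
    shares over the field; any n-1 shares reveal nothing about vec."""
    fixed = np.round(vec * SCALE).astype(np.int64) % MODULUS
    shares = [rng.integers(0, MODULUS, size=vec.shape) for _ in range(n_shares - 1)]
    shares.append((fixed - sum(shares)) % MODULUS)
    return shares

def reconstruct(per_server_sums):
    """Combine each server's sum of shares into the plaintext aggregate."""
    total = sum(per_server_sums) % MODULUS
    signed = np.where(total > MODULUS // 2, total - MODULUS, total)  # undo wraparound
    return signed.astype(np.float64) / SCALE

# --- toy round: 3 clients with different privacy budgets, 2 servers ---
epsilons = [0.5, 1.0, 4.0]                        # personalized budgets (assumed values)
updates = [rng.normal(size=8) for _ in epsilons]  # local model updates

noisy = [personalized_dp_noise(u, eps) for u, eps in zip(updates, epsilons)]
client_shares = [additive_shares(u, n_shares=2) for u in noisy]

# Each server receives exactly one share per client and sums them locally;
# only the combined sums reveal the aggregate, never an individual update.
server_sums = [sum(cs[s] for cs in client_shares) % MODULUS for s in range(2)]
aggregate = reconstruct(server_sums)
print("mean aggregated update:", np.round(aggregate / len(updates), 3))
```

The design point the sketch highlights is that the two mechanisms compose: personalized noise bounds what the final aggregate can reveal about any one participant, while secret sharing prevents an honest-but-curious server from inspecting individual (even noisy) updates during transmission and aggregation.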