Published in

Association for Computing Machinery (ACM), ACM Transactions on Intelligent Systems and Technology, 13(4), pp. 1–20, 2022

DOI: 10.1145/3501812

Efficient Federated Matrix Factorization Against Inference Attacks

Journal article published in 2022 by Di Chai, Leye Wang, Kai Chen, Qiang Yang
This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Recommender systems typically require users to reveal their ratings to the recommender server, which then uses these ratings to provide personalized services. However, such revelations leave users vulnerable to inference attacks, allowing the server to learn private attributes such as age and gender. In this paper, we therefore propose an efficient federated matrix factorization method that protects users against inference attacks. The key idea is to obfuscate one user's ratings with another's so that private-attribute leakage is minimized under a given distortion budget, which bounds both the recommendation loss and the system-efficiency overhead. During obfuscation, we apply differential privacy to control information leakage between users, and we adopt homomorphic encryption to protect intermediate results during training. Our framework is implemented and tested on real-world datasets; the results show that our method reduces inference-attack accuracy by up to 16.7% compared to using no privacy protection.
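The full text is unavailable here, but the federated pattern the abstract describes — clients keep their own ratings and user factors, share only differentially private updates with the server — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the factorization model, the Laplace noise on shared gradients, and all parameters (`lr`, `epsilon`, `sensitivity`) are illustrative assumptions, and the paper's rating obfuscation and homomorphic encryption of intermediates are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, k = 4, 6, 3        # toy problem size
lr, epsilon, sensitivity = 0.05, 1.0, 1.0

# Item factors V live on the server; each client u keeps its
# private user factor U[u] and its own ratings row R[u] locally.
V = rng.normal(scale=0.1, size=(n_items, k))
U = rng.normal(scale=0.1, size=(n_users, k))
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)

for _ in range(50):
    grad_sum = np.zeros_like(V)
    for u in range(n_users):                       # per-client local step
        err = R[u] - U[u] @ V.T                    # prediction error, shape (n_items,)
        U[u] += lr * (err @ V)                     # private user-factor update, never shared
        grad = err[:, None] * U[u][None, :]        # gradient w.r.t. V, shape (n_items, k)
        # Laplace mechanism (scale = sensitivity/epsilon) on the shared gradient
        noisy = grad + rng.laplace(scale=sensitivity / epsilon, size=grad.shape)
        grad_sum += noisy
    V += lr * grad_sum / n_users                   # server aggregates noisy gradients

rmse = np.sqrt(np.mean((R - U @ V.T) ** 2))
```

The server only ever sees noise-perturbed gradients for `V`; raw ratings and user factors never leave the client, which is the privacy boundary the federated setting relies on. In the paper this protection is strengthened further by rating obfuscation and homomorphic encryption.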