Association for Computing Machinery (ACM), ACM Transactions on Intelligent Systems and Technology, 4(13), pp. 1–20, 2022
DOI: 10.1145/3501812
Recommender systems typically require users to reveal their ratings to the recommender server, which then uses them to provide personalized services. However, this revelation exposes users to a broad class of inference attacks, allowing the server to learn users' private attributes, e.g., age and gender. In this paper, we therefore propose an efficient federated matrix factorization method that protects users against inference attacks. The key idea is to obfuscate one user's ratings with another's so that private-attribute leakage is minimized under a given distortion budget, which bounds both the recommendation loss and the system-efficiency overhead. During obfuscation, we apply differential privacy to control information leakage between users. We also adopt homomorphic encryption to protect intermediate results during training. Our framework is implemented and evaluated on real-world datasets; the results show that our method reduces inference attack accuracy by up to 16.7% compared with using no privacy protection.
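To make the obfuscation idea concrete, the sketch below shows one standard way to release a rating under a differential-privacy guarantee: the Laplace mechanism, with noise calibrated to a privacy parameter epsilon and the rating range as sensitivity. This is only an illustrative sketch, not the paper's actual mechanism; the function names (`laplace_noise`, `obfuscate_rating`) and the 1–5 rating scale are assumptions introduced here for illustration.

```python
import math
import random


def laplace_noise(scale, rng):
    # Draw one sample from Laplace(0, scale) via inverse-CDF sampling.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def obfuscate_rating(rating, epsilon, r_min=1.0, r_max=5.0, rng=None):
    """Release a rating under epsilon-differential privacy (Laplace
    mechanism). The sensitivity is the rating range r_max - r_min,
    since swapping one user's rating for another's changes the value
    by at most that amount."""
    rng = rng or random.Random()
    sensitivity = r_max - r_min
    noisy = rating + laplace_noise(sensitivity / epsilon, rng)
    # Clip back into the valid rating range. Clipping is post-processing,
    # so it does not weaken the differential-privacy guarantee.
    return min(r_max, max(r_min, noisy))


# Example: a smaller epsilon means stronger privacy but larger distortion,
# which is the trade-off a distortion budget would bound.
rng = random.Random(0)
released = obfuscate_rating(4.0, epsilon=1.0, rng=rng)
```

In a federated setting, each client would apply such a mechanism locally before any rating-derived quantity leaves the device, while encrypted aggregation (e.g., homomorphic encryption, as in the paper) protects the intermediate model updates themselves.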