Published in

Association for Computing Machinery (ACM), ACM Transactions on Information Systems, 41(4), pp. 1-28, 2023

DOI: 10.1145/3572834

On the Vulnerability of Graph Learning-based Collaborative Filtering

This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Graph learning-based collaborative filtering (GLCF), which is built upon the message-passing mechanism of graph neural networks (GNNs), has recently received great attention and exhibited superior performance in recommender systems. However, although prior work has shown that GNNs can be easily compromised by adversarial attacks, little attention has been paid to the vulnerability of GLCF. Questions such as whether GLCF models can be fooled as easily as GNNs remain largely unexplored. In this article, we study the vulnerability of GLCF. Specifically, we first propose an adversarial attack against GLCF. Considering the unique challenges of attacking GLCF, we adopt a greedy strategy to search for locally optimal perturbations and design an attack utility function to handle the non-differentiable ranking-oriented metrics. Next, we propose a defense to robustify GLCF. The defense is based on the observation that attacks usually introduce suspicious interactions into the graph to manipulate the message-passing process. We therefore measure a suspiciousness score for each interaction and reduce the message weights of suspicious interactions. We also give a theoretical guarantee of the defense's robustness. Experimental results on three benchmark datasets demonstrate the effectiveness of both our attack and our defense.
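
The abstract only sketches the attack at a high level. As a concrete illustration, the following is a minimal, hypothetical Python sketch of a greedy perturbation search over a user-item interaction graph. The LightGCN-style surrogate scorer `attack_utility`, the dot-product score standing in for the non-differentiable ranking-oriented metrics, and all names and parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def attack_utility(adj, target_user, target_item, embed_dim=8, layers=2):
    """Surrogate attack utility: the target item's predicted score for the
    target user after LightGCN-style propagation over `adj` (hypothetical;
    a smooth stand-in for non-differentiable ranking metrics)."""
    n = adj.shape[0]
    rng = np.random.default_rng(0)                 # fixed toy embeddings
    emb = rng.normal(size=(n, embed_dim))
    deg = adj.sum(axis=1) + 1e-8
    norm_adj = adj / np.sqrt(np.outer(deg, deg))   # symmetric normalization
    out, h = emb.copy(), emb
    for _ in range(layers):
        h = norm_adj @ h                           # one round of message passing
        out += h
    out /= layers + 1                              # layer-averaged embeddings
    return out[target_user] @ out[target_item]

def greedy_attack(adj, target_user, target_item, budget, n_users):
    """Greedily flip, one edge per step, the user-item edge whose flip most
    increases the surrogate utility (brute-force local search for clarity)."""
    adj = adj.copy()
    n = adj.shape[0]
    for _ in range(budget):
        base = attack_utility(adj, target_user, target_item)
        best_gain, best_edge = 0.0, None
        for u in range(n_users):                   # user nodes: 0..n_users-1
            for i in range(n_users, n):            # item nodes: n_users..n-1
                if (u, i) == (target_user, target_item):
                    continue                       # skip the target edge itself
                adj[u, i] = adj[i, u] = 1 - adj[u, i]          # try flip
                gain = attack_utility(adj, target_user, target_item) - base
                adj[u, i] = adj[i, u] = 1 - adj[u, i]          # undo flip
                if gain > best_gain:
                    best_gain, best_edge = gain, (u, i)
        if best_edge is None:                      # no locally improving flip
            break
        u, i = best_edge
        adj[u, i] = adj[i, u] = 1 - adj[u, i]      # commit the best flip
    return adj
```

Brute-force enumeration is used here only to make the greedy step explicit; a practical attack would restrict the candidate set, for example to edges incident to injected fake users.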
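
The defense is likewise described only by its intuition. Below is a hypothetical sketch of one way to realize it: score each interaction by how dissimilar its endpoint embeddings are, then shrink the message weight of high-scoring (suspicious) edges before propagation. The cosine-dissimilarity score, the exponential re-weighting, and the parameter `tau` are illustrative assumptions, not the paper's actual scoring function.

```python
import numpy as np

def suspicious_scores(adj, emb):
    """Assign each interaction a suspiciousness score: edges joining nodes
    with dissimilar embeddings are treated as more suspicious (assumed)."""
    unit = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-8)
    cos = unit @ unit.T                        # pairwise cosine similarity
    return (1.0 - cos) * adj                   # nonzero only on existing edges

def robust_propagate(adj, emb, layers=2, tau=1.0):
    """LightGCN-style propagation with suspicious messages down-weighted."""
    s = suspicious_scores(adj, emb)
    weights = adj * np.exp(-tau * s)           # shrink suspicious edge weights
    deg = weights.sum(axis=1) + 1e-8
    norm_w = weights / np.sqrt(np.outer(deg, deg))
    out, h = emb.copy(), emb
    for _ in range(layers):
        h = norm_w @ h                         # re-weighted message passing
        out += h
    return out / (layers + 1)                  # layer-averaged embeddings
```

Setting `tau = 0` recovers the standard propagation, so the re-weighting degrades gracefully when no attack is present.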