FinchNie opened this issue 2 years ago
Hi, thanks for your great work.
I am confused about the experiment on robustness to noisy interactions in this paper, which says:

> Towards this end, we contaminate the training set by adding a certain proportion of adversarial examples (i.e., 5%, 10%, 15%, 20% negative user-item interactions), while keeping the testing set unchanged.
I tried to sample from interactions that appear in neither train.txt nor test.txt, but I did not observe much difference between LightGCN and SGL. I wonder whether this is the proper way to generate the adversarial examples; see the sketch below for what I did.
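Here is a minimal sketch of my sampling procedure. It assumes the LightGCN-style data layout (each line is a user ID followed by its item IDs); the file names, the `ratio` argument, and the seed are placeholders, not anything from the paper's setup:

```python
import random


def load_pairs(path):
    """Parse LightGCN-style files: each line is 'user item1 item2 ...'."""
    pairs = set()
    with open(path) as f:
        for line in f:
            tokens = line.split()
            if len(tokens) < 2:
                continue
            user = int(tokens[0])
            for item in tokens[1:]:
                pairs.add((user, int(item)))
    return pairs


def add_noise(train_path, test_path, out_path, ratio=0.05, seed=2021):
    random.seed(seed)
    train = load_pairs(train_path)
    observed = train | load_pairs(test_path)
    users = sorted({u for u, _ in observed})
    items = sorted({i for _, i in observed})
    n_noise = int(len(train) * ratio)

    # Rejection-sample (user, item) pairs that appear in neither file;
    # these unobserved pairs are treated as adversarial negatives.
    noise = set()
    while len(noise) < n_noise:
        pair = (random.choice(users), random.choice(items))
        if pair not in observed and pair not in noise:
            noise.add(pair)

    # Write the contaminated training set, grouping items per user.
    contaminated = {}
    for u, i in train | noise:
        contaminated.setdefault(u, []).append(i)
    with open(out_path, "w") as f:
        for u in sorted(contaminated):
            f.write(" ".join(map(str, [u] + sorted(contaminated[u]))) + "\n")


add_noise("train.txt", "test.txt", "train_noisy_5.txt", ratio=0.05)
```

With this procedure the testing set stays unchanged and only the training set gains the extra negatives, which I believe matches the description in the paper.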
Looking forward to your reply.
I have uploaded the code (./add_noise.py) that generates the contaminated training data; see the guidance here.