wujcan / SGL-TensorFlow


How to generate adversarial examples? #29

Open · FinchNie opened this issue 2 years ago

FinchNie commented 2 years ago

Hi, thanks for your great work.

I am confused about the robustness to noisy interactions discussed in this paper:

Towards this end, we contaminate the training set by adding a certain proportion of adversarial examples (i.e., 5%, 10%, 15%, 20% negative user-item interactions), while keeping the testing set unchanged.

I tried to sample from interactions that do not appear in train.txt or test.txt, but I did not observe much difference between LightGCN and SGL. I wonder whether it is proper to generate adversarial examples in this way.

Looking forward to your reply.

wujcan commented 2 years ago


I have uploaded the code (./add_noise.py) that generates the contaminated training data; see the guidance here.
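For readers without access to that script, here is a minimal sketch of this kind of noise injection (not the repository's actual add_noise.py). It assumes the common LightGCN/NGCF data format where each line of train.txt and test.txt is `uid iid1 iid2 ...`; the function name and parameters are illustrative.

```python
import random
from collections import defaultdict

def add_noise(train_path, test_path, out_path, ratio=0.05, seed=2021):
    """Write a contaminated copy of the training set: the original
    interactions plus ratio * |train| random negative pairs, i.e.
    user-item pairs that appear in neither train nor test."""
    random.seed(seed)

    observed = defaultdict(set)  # user -> items seen in train or test
    train = defaultdict(set)     # user -> train items only
    all_items = set()
    n_train = 0                  # number of training interactions

    for path, is_train in ((train_path, True), (test_path, False)):
        with open(path) as f:
            for line in f:
                toks = line.split()
                if len(toks) < 2:
                    continue
                u, items = int(toks[0]), [int(t) for t in toks[1:]]
                observed[u].update(items)
                all_items.update(items)
                if is_train:
                    train[u].update(items)
                    n_train += len(items)

    users, item_pool = list(train), list(all_items)
    n_noise = int(ratio * n_train)
    added = 0
    while added < n_noise:
        u, i = random.choice(users), random.choice(item_pool)
        if i not in observed[u]:  # a true negative: unseen in train and test
            train[u].add(i)       # inject the fake interaction
            observed[u].add(i)    # never sample the same pair twice
            added += 1

    with open(out_path, "w") as f:
        for u in sorted(train):
            f.write(" ".join(map(str, [u] + sorted(train[u]))) + "\n")
```

Called as, e.g., `add_noise("train.txt", "test.txt", "train_noise_0.05.txt", ratio=0.05)`. Sampling only pairs absent from both files ensures the injected interactions are genuine negatives, while the test set stays unchanged, which matches the setup described in the paper.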