wujcan / SGL-TensorFlow


Question about sec 4.3.3 #15

Closed hotchilipowder closed 2 years ago

hotchilipowder commented 2 years ago

Hello, I have been working on SGL recently. Thank you for offering this project. In sec 4.3.3, it says that

we contaminate the training set by adding a certain proportion of adversarial examples (i.e., 5%, 10%, 15%, 20% negative user-item interactions), while keeping the testing set unchanged. Figure 6 shows the results on Yelp2018 and Amazon- Book datasets.

May I ask how to get the adversarial examples? Is it just to add random links?

I used Amazon-Book (keeping only interactions with rating > 3) and added interactions with rating 1 or 2 as adversarial examples, but I don't observe such performance degradation.

wujcan commented 2 years ago


Yes, we just add random links sampled from the unobserved interaction space. Our goal here is to inject some false positive samples into the training data to verify that SGL improves the robustness of GCN models.
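
A minimal sketch of how such contamination could be done, assuming the training set is a list of (user, item) pairs; the function name `contaminate_training_set` and the `ratio` parameter are illustrative, not part of the SGL codebase:

```python
import random

def contaminate_training_set(train_pairs, num_users, num_items, ratio, seed=0):
    """Add `ratio` (e.g. 0.05, 0.10, 0.15, 0.20) random user-item links
    drawn from the unobserved interaction space to the training pairs.
    The test set is left untouched."""
    rng = random.Random(seed)
    observed = set(train_pairs)
    num_fake = int(len(train_pairs) * ratio)
    fake = set()
    while len(fake) < num_fake:
        u = rng.randrange(num_users)
        i = rng.randrange(num_items)
        # Only keep links that are neither observed nor already sampled,
        # so every added pair is a false positive.
        if (u, i) not in observed and (u, i) not in fake:
            fake.add((u, i))
    return train_pairs + list(fake)
```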

hotchilipowder commented 2 years ago

Thanks a lot!