Closed cszhangzhen closed 2 years ago
Hi,
Thanks for your interest! The perturbations are indeed updated during training, due to the random sampling from a Gaussian distribution. From my perspective, positive samples do not necessarily have to come from data augmentations [1]. In addition, we set the mean of the Gaussian distribution to zero and its std to the std of the corresponding layer's parameters, so that the perturbation does not alter the semantics of the original graphs too much.
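The sampling scheme described above can be sketched roughly as follows (a minimal sketch, not the repository's actual code; the scaling factor `eta` is an assumed knob, not necessarily the paper's exact formulation):

```python
import copy
import torch
import torch.nn as nn

def perturb_encoder(encoder: nn.Module, eta: float = 1.0) -> nn.Module:
    """Return a perturbed copy of the encoder: each parameter tensor
    receives Gaussian noise with zero mean and a std equal to that
    tensor's own std (scaled by `eta`, an assumed hyper-parameter)."""
    perturbed = copy.deepcopy(encoder)
    with torch.no_grad():
        for p in perturbed.parameters():
            # Guard against single-element or constant tensors,
            # whose std is undefined or zero.
            std = p.std().item() if p.numel() > 1 else 0.0
            if std > 0:
                p.add_(torch.normal(mean=0.0, std=std * eta, size=p.shape))
    return perturbed
```

The perturbed copy then serves as the second view: the original encoder and its perturbed twin embed the same graph, and the two embeddings form a positive pair.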
[1] SimCSE: Simple Contrastive Learning of Sentence Embeddings (https://arxiv.org/abs/2104.08821, EMNLP 2021)
Best, Jun.
Thanks for your reply.
I understand that the perturbations are "changed" during the training procedure, but they are not updated by gradient ascent; they are randomly sampled from a Gaussian distribution. To be honest, this is not consistent with what you claim in the paper. Also, I'm not sure whether it can guarantee the constraint $\Vert \mathbf{x}_i^{'} - \mathbf{x}_i \Vert \le \mathbf{\epsilon}$.
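To illustrate the concern: Gaussian noise has unbounded support, so a hard norm constraint can hold at best with high probability. A toy numerical check (`sigma`, `eps`, and the dimension are made-up illustrative values, not hyper-parameters from the paper):

```python
import torch

torch.manual_seed(0)

# Sample many Gaussian perturbations of a 16-dimensional vector
# and inspect their norms.
sigma, eps, dim, trials = 0.02, 0.1, 16, 10_000
deltas = torch.randn(trials, dim) * sigma
norms = deltas.norm(dim=1)

# Norms concentrate near sigma * sqrt(dim), but Gaussian support is
# unbounded, so a bound ||x' - x|| <= eps can only hold with high
# probability, never deterministically.
frac_within = (norms <= eps).float().mean().item()
```

With these values most samples fall inside the ball of radius `eps`, but some do not, which is the sense in which random Gaussian perturbations differ from a projected adversarial update.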
You mean AT-SimGRACE, while I mean SimGRACE. In fact, we conduct adversarial perturbations in AT-SimGRACE. For SimGRACE, the random perturbations are sampled from a Gaussian distribution. We encourage you to read our paper more thoroughly. The code of AT-SimGRACE can be found here: https://github.com/junxia97/SimGRACE/tree/main/adversarial_robustness.
Oh, I see. AT-SimGRACE is tested on synthetic data, not the TU datasets. I made a mistake and thought it was implemented on the TU datasets. Thanks.
Hi,
Thanks for sharing your code.
After reading your code (simgrace.py in unsupervised TU), I found that the perturbations are not updated; they are simply added to the GNN model's parameters. In that case, how can you guarantee the result is a positive sample? If I have misunderstood, please correct me.