Closed andrew7shen closed 11 months ago
Hi, we have set the random seed for torch and numpy in downstream.py. There may be some residual nondeterminism in torch_scatter, but it should not have a large influence on the final embeddings.
https://github.com/DeepGraphLearning/GearNet/blob/7873e2e594234ab581a1119c6ce2f09800593e0e/script/downstream.py#L56-L66
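For reference, seeding all the relevant RNGs usually looks something like the sketch below. This is a generic illustration of the approach, not a copy of the linked code; the exact seeds and flags used in `downstream.py` may differ, and the cuDNN flags are an optional extra for GPU determinism.

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    # Seed every RNG the pipeline may touch (Python, NumPy, PyTorch CPU/GPU).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Optional: trade some speed for deterministic cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

# Re-seeding before each run makes the random draws reproducible.
set_seed(0)
a = torch.randn(3)
set_seed(0)
b = torch.randn(3)
assert torch.equal(a, b)
```

Note that GPU scatter ops (as in torch_scatter) can still be nondeterministic even with all seeds fixed, because floating-point atomics accumulate in a nondeterministic order.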
Gotcha, thanks!
Hi!
I was wondering if there is any reason the GearNetIEConv encoder would return different embeddings for the same input file. I encountered this with my own data, but when I set a torch manual_seed, the embeddings became constant for the same input. Is this expected to have any effect on model performance?
Thanks for your help!