PetarV- / DGI

Deep Graph Infomax (https://arxiv.org/abs/1809.10341)
MIT License

Out of Memory on Pubmed Dataset #3

Closed: Tiiiger closed this issue 5 years ago

Tiiiger commented 5 years ago

I tried to run the released execute.py on Pubmed. However, it seems to take 19.25 GB during backpropagation.

Is this the expected behavior? Is there any way to work around this and replicate the number reported in the paper?
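
For reference, a minimal sketch of how the peak allocation can be checked with PyTorch's built-in counter (`model`, `features`, `adj`, `lbl`, and `loss_fn` are placeholders here, not names from execute.py; `torch.cuda.max_memory_allocated` needs a reasonably recent PyTorch):

```python
import torch

# Run one training step and report the peak CUDA allocation afterwards.
# `model`, `features`, `adj`, `lbl`, and `loss_fn` are placeholders,
# not names from execute.py.
def report_peak_memory(model, features, adj, lbl, loss_fn):
    model.cuda()
    logits = model(features.cuda(), adj.cuda())
    loss = loss_fn(logits, lbl.cuda())
    loss.backward()  # the backward pass is where the ~19 GB shows up
    peak_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
    print("peak CUDA memory: %.2f GB" % peak_gb)
```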

PetarV- commented 5 years ago

Did you reduce the feature size to 256, as the paper reports?
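
Going from memory, the hidden size is set near the top of execute.py with the other hyperparameters; the variable name below is what I believe the script uses, so double-check your copy:

```python
# Near the other hyperparameters at the top of execute.py. 512 is the
# setting for the smaller citation graphs; the paper uses 256 on Pubmed
# so that it fits in memory (variable name assumed; check your copy).
hid_units = 256  # was 512; this should roughly halve the activation memory
```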

Tiiiger commented 5 years ago

Sorry, my bad; I didn't change it to 256.

After I changed the hidden size to 256, the result I get on Pubmed is $78.47 \pm 0.66$, much higher than the number reported in the paper. All I did was copy the Pubmed data into /data and change the hidden size. You might want to check this and perhaps update the reported number.

Again, thank you for sharing the code! It would also be very nice if you could share the DGI implementation for Reddit (in TensorFlow or anything that works).

PetarV- commented 5 years ago

Is your result single-run, or averaged over multiple runs? I'm not ruling anything out, but it could always be due to PyTorch versions.

Tiiiger commented 5 years ago

This is averaged over 10 runs, and I am using PyTorch 1.0.

PetarV- commented 5 years ago

You could've gotten lucky; try 50 runs, as described in the paper.
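
Something along these lines, with `run_dgi_once` standing in as a hypothetical helper for one full train-plus-linear-evaluation pass that returns test accuracy:

```python
import numpy as np

# Evaluation protocol from the paper: 50 independent runs, report the
# mean and standard deviation. `run_dgi_once` is a placeholder helper.
accs = np.array([run_dgi_once(seed=s) for s in range(50)])
print("accuracy: %.2f +/- %.2f" % (accs.mean(), accs.std()))
```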

But yeah, it could well be the PyTorch version; the one I used for the experiments is 0.4.0. I might try to re-run with an upgraded stack once my schedule clears up a little bit. :'(
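
As a quick sanity check when comparing numbers across setups, it's worth logging the framework version next to each result:

```python
import torch

# Log the framework version alongside each result, since runs from
# PyTorch 0.4.0 and 1.0 are being compared in this thread.
print(torch.__version__)
```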

In either case, thanks for taking the effort to re-run this experiment and for reporting the outcome!

Tiiiger commented 5 years ago

Great! I am closing this.