lrjconan / GRAN

Efficient Graph Generation with Graph Recurrent Attention Networks, Deep Generative Model of Graphs, Graph Neural Networks, NeurIPS 2019
MIT License

Issue while Training on CPU #19

Open Ralfons-06 opened 2 years ago

Ralfons-06 commented 2 years ago

Hi, I just tried to train the model on a CPU, but I ran into some problems. During training I always get an output message saying that the loss at iteration x is 0, which seems odd:

NLL Loss @ epoch 0001 iteration 00000001 = 0.0000
NLL Loss @ epoch 0063 iteration 00000250 = 0.0000

After going through the code of gran_runner, I realized that the part of the code where the loss is calculated is never reached when no GPU is available, since batch_fwd is empty in that case:

https://github.com/lrjconan/GRAN/blob/43cb4433e6f69401c3a4a6e946ea75da6ec35d72/runner/gran_runner.py#L230-L259
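To make the control-flow issue concrete, here is a minimal sketch of the pattern I mean (the names `batch_fwd` and the exact loop structure are simplified from the linked code, not copied verbatim): if the list of batches is only populated on the GPU path, the loss loop never executes on CPU and the accumulated loss stays at 0.

```python
def train_step_buggy(loss_fn, batch, use_gpu=False):
    # Sketch of the suspected bug: batch_fwd is only filled when a GPU
    # is in use, so on CPU the loop below never runs and the reported
    # loss is always 0.0.
    batch_fwd = []
    if use_gpu:
        batch_fwd.append(batch)
    total_loss = 0.0
    for b in batch_fwd:
        total_loss += loss_fn(b)
    return total_loss


def train_step_fixed(loss_fn, batch, use_gpu=False):
    # Possible fix: always enqueue the batch; only device placement
    # (e.g. moving tensors to CUDA) should depend on GPU availability.
    batch_fwd = [batch]
    total_loss = 0.0
    for b in batch_fwd:
        total_loss += loss_fn(b)
    return total_loss
```

On CPU, `train_step_buggy` returns 0.0 regardless of the data, while `train_step_fixed` computes the actual loss.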

Is this a bug, or did I miss something?