QiyaoHuang closed this issue 2 years ago
It did consume a lot of memory; thank you for the warning. We are about to package the binary DD output and release it here.
A high learning rate, the environment version, a bad random initialization, or some other factor may cause this. I tried NCI1 and NCI109 just now and both work normally on my machine. You could try other parameter settings, adding intermediate outputs, or using gradient clipping.
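As a minimal sketch of the gradient-clipping suggestion above, here is a NumPy implementation of global-norm clipping (the same rescaling that `torch.nn.utils.clip_grad_norm_` performs in PyTorch). The function name and the 1e-6 stabilizer are illustrative choices, not code from this repository:

```python
import numpy as np

def clip_grad_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their global L2 norm is <= max_norm.

    Returns the (possibly rescaled) gradients and the pre-clipping norm.
    """
    # Global norm across all parameter gradients
    total_norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if total_norm > max_norm:
        # Small epsilon guards against division issues near zero
        scale = max_norm / (total_norm + 1e-6)
        grads = [g * scale for g in grads]
    return grads, total_norm
```

Calling this on each parameter's gradient right after `backward()` and before the optimizer step caps the update size, which often stops a `loss: nan` run from diverging further.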
https://pan.baidu.com/s/1rdYhypHCebxBknMrIzAubg password: eqlo Note that the features and the subgraphs have been split into the folders "features" and "subadj" to address your memory problem; you can just change the code accordingly (you don't need to load them all at once; load each specific graph when you need it during training).
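A hedged sketch of the on-demand loading described above: fetch one graph's feature matrix and sub-adjacency from the "features" and "subadj" folders only when the training loop reaches it. The per-graph file naming (`graph_{idx}.npy`) and `.npy` format are assumptions for illustration; adjust them to match the released archive:

```python
import os
import numpy as np

def load_graph(data_dir, idx):
    """Load a single graph's features and sub-adjacency on demand.

    File layout and naming are assumed for illustration:
        data_dir/features/graph_{idx}.npy
        data_dir/subadj/graph_{idx}.npy
    """
    feat = np.load(os.path.join(data_dir, "features", f"graph_{idx}.npy"))
    adj = np.load(os.path.join(data_dir, "subadj", f"graph_{idx}.npy"))
    return feat, adj

# In the training loop, fetch only the current graph instead of the
# whole dataset, so peak memory stays at one graph's footprint:
#
# for idx in train_indices:
#     feat, adj = load_graph("NCI1", idx)
#     ...  # forward/backward pass on this graph
```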
```
folds 1/10:   0%| | 2/1000 [04:11<33:48:52, 121.98s/it, k:0.80, loss: nan, best_acc:1.00, RL:0]
folds 1/10:   0%| | 2/1000 [05:52<33:48:52, 121.98s/it, k:0.80, loss: nan, best_acc:1.00, RL:0]
folds 1/10:   0%| | 3/1000 [05:52<31:02:52, 112.11s/it, k:0.80, loss: nan, best_acc:1.00, RL:0]
```