Closed hmt2014 closed 5 years ago
Hi
Did you test it on CPU?
I noticed this while testing on CPU; in my case, testing on GPU is fine. If I remember correctly, it happens at the beginning of generation, where att_idx[[edges[:, 0]]] is empty.
I believe it is caused by an inconsistency between the CPU and GPU implementations of scatter in PyTorch.
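For reference, scatter's contract can be modeled in plain Python: every index must lie in [0, size(dim)), which is exactly what the CUDA kernel enforces with a device-side assert, and an empty index should simply be a no-op. This is a minimal sketch of those semantics (pure Python, no torch; the helper name is mine, not PyTorch API):

```python
def scatter_dim1(target, index, value):
    """Toy model of Tensor.scatter(1, index, value) for nested lists.
    Enforces the same bounds check the CUDA kernel asserts:
    0 <= idx < size along dim 1."""
    for i in range(len(index)):
        for idx in index[i]:
            # out-of-range index -> error (device-side assert on GPU)
            if not (0 <= idx < len(target[i])):
                raise IndexError(
                    f"index {idx} out of bounds for dim of size {len(target[i])}")
            target[i][idx] = value
    return target

# normal case: a one-hot style write
print(scatter_dim1([[0, 0, 0, 0]], [[2]], 1))  # [[0, 0, 1, 0]]

# empty index, as with att_idx[[edges[:, 0]]] at the start of generation:
# a well-defined no-op, nothing is written
print(scatter_dim1([], [], 1))  # []
```

Under this model the empty-index case is perfectly legal, which is why a CPU-only failure there would point to an implementation inconsistency rather than a misuse of the API.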
Thanks :-D
Follow-up question: if I set self.att_edge_dim = 16 instead of 64, then on the line following the one mentioned above (https://github.com/lrjconan/GRAN/blob/43cb4433e6f69401c3a4a6e946ea75da6ec35d72/model/gran_mixture_bernoulli.py#L246-L247) I get this error:
C:\w\b\windows\pytorch\aten\src\ATen\native\cuda\ScatterGatherKernel.cu:276: block: [13,0,0], thread: [32,0,0] Assertion `idx_dim >= 0 && idx_dim < index_size && "index out of bounds"` failed.
I should warn you that this is with some other changes to the project. Still, I'm curious: is there a reason att_edge_dim is hard-coded to 64? 32 and 64 somehow work. Also, does self.has_rand_feat work? I'm having trouble with that too.
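If I read the linked snippet right, att_edge_feat is 2 * att_edge_dim wide and the second scatter writes at att_idx + att_edge_dim, so every att_idx value has to satisfy 0 <= idx < att_edge_dim for the write to stay in bounds. A pre-check along these lines (my own hypothetical helper, not part of GRAN) would turn the opaque CUDA assert into a readable error:

```python
def check_att_scatter_bounds(att_idx, att_edge_dim):
    """Hypothetical pre-check for the scatter around L246-247 of
    gran_mixture_bernoulli.py. Since the feature tensor has width
    2 * att_edge_dim and the second scatter offsets indices by
    att_edge_dim, every att_idx value must be < att_edge_dim."""
    bad = [i for i in att_idx if not (0 <= i < att_edge_dim)]
    if bad:
        raise ValueError(
            f"att_idx values {bad} exceed att_edge_dim={att_edge_dim}; "
            f"the CUDA scatter would trip the index-out-of-bounds assert")
    return True

# with att_edge_dim = 16, any per-block node index >= 16 overflows:
check_att_scatter_bounds([0, 3, 15], 16)    # fine
# check_att_scatter_bounds([0, 3, 31], 16)  # raises ValueError
```

If that is what's happening, 64 would just be a bound comfortably above the largest per-block node index in the shipped datasets, and 16 would fail as soon as a block contains a node with index 16 or higher.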
Error when testing at this line of code: att_edge_feat = att_edge_feat.scatter(1, att_idx[[edges[:, 0]]], 1)