Closed by LukeLIN-web 1 year ago
But I ran into trouble. The current problem is https://github.com/snap-stanford/ogb/issues/425#issuecomment-1500817508:
out torch.Size([1024, 172])
target torch.Size([1024, 1])
Traceback (most recent call last):
File "neighborloader.py", line 98, in <module>
main()
File "neighborloader.py", line 91, in main
loss = F.cross_entropy(out, target.squeeze(1))
File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 2996, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: "nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Float'
Fixed it by casting the target with target.long().
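The error happens because F.cross_entropy expects integer class indices, while the ogbn-papers100M labels are stored as floats. A minimal sketch of the failure and the fix, using random tensors with the shapes from the traceback above (the shapes and class count are taken from the log, not from the actual script):

```python
import torch
import torch.nn.functional as F

# Shapes from the traceback: batch of 1024 nodes, 172 classes.
out = torch.randn(1024, 172)                        # model logits (float)
target = torch.randint(0, 172, (1024, 1)).float()   # labels loaded as float, as in the dataset

# F.cross_entropy needs class indices of an integer dtype (torch.long);
# passing float indices raises the "not implemented for 'Float'" error.
loss = F.cross_entropy(out, target.squeeze(1).long())  # .long() is the fix
print(loss.item())
```

With random logits the loss lands near log(172) ≈ 5.15, confirming the call now runs.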
I use the PyG NeighborLoader to train on the large ogbn-papers100M dataset. I am not sure whether this should be filed in the PyG repo or this repo.