Closed — juesato closed this issue 7 years ago
It's doing that when a Thrust call happens, I think. If you update to the latest cutorch/cunn, you'll pick up this commit, which should help: https://github.com/torch/cunn/commit/1f8292f4f8334ab9e2433a0960607c780b29f848
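For reference, on a standard luarocks-based Torch install, updating to the latest cutorch/cunn (which pulls in that commit) would typically look like this; paths and rock names assume the stock Torch distro:

```shell
# Rebuild cutorch first, since cunn is built against it
luarocks install cutorch
luarocks install cunn
```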
Sorry for the slow turnaround. That indeed does fix the issue! Thanks.
I have a LookupTable layer that's allocating memory during random backwards passes (it happens ~once every 100 backwards passes).
I've narrowed the allocation down to this line:
but I'm not clear on what's going on inside the CUDA kernel.
All the inputs to the layer should be the same type (torch.LongTensor) and shape.
Could somebody give me tips on what to look for?
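To illustrate the kind of check involved, here is a minimal sketch (not the poster's actual model; the vocabulary size, embedding dimension, and batch shape are made up) that watches free GPU memory around each LookupTable backward pass using cutorch's `getMemoryUsage`:

```lua
-- Sketch: detect unexpected GPU allocations during LookupTable:backward().
-- All sizes below are hypothetical, chosen only for illustration.
require 'cunn'

local vocab, dim = 10000, 128
local lookup = nn.LookupTable(vocab, dim):cuda()
local input  = torch.LongTensor(32, 20):random(vocab):cuda()

for pass = 1, 200 do
  local out  = lookup:forward(input)
  local grad = out:clone():fill(0.1)          -- dummy gradOutput
  local before = cutorch.getMemoryUsage(cutorch.getDevice())
  lookup:zeroGradParameters()
  lookup:backward(input, grad)
  cutorch.synchronize()                        -- make sure kernels finished
  local after = cutorch.getMemoryUsage(cutorch.getDevice())
  if after < before then
    -- free memory dropped across this backward pass: something allocated
    print(('pass %d: ~%d bytes newly allocated'):format(pass, before - after))
  end
end
```

With the pre-fix cunn, the sort inside LookupTable's backward could trigger Thrust's temporary-buffer allocation on some passes, which a loop like this would surface as occasional drops in free memory.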