Closed: ShuhanChen closed this issue 3 years ago

ShuhanChen:

RuntimeError: input.is_contiguous() INTERNAL ASSERT FAILED at "CUDA/softpool_cuda.cpp":95, please report a bug to PyTorch. input must be a contiguous tensor.

I met the above error. How can I solve it? Thanks.
Hi @ShuhanChen,
Contiguity-flag errors are related to PyTorch tensors not occupying a single block of memory. A tensor is contiguous when it both occupies one unbroken block of memory and stores its elements in the same order as its indices (an in-depth NumPy explanation can be found here). I have not yet added a check for this, as it only arises in specific operations.
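A quick way to see the flag in action is to inspect `.is_contiguous()` and `.stride()`; a minimal sketch (the shapes are arbitrary):

```python
import torch

x = torch.rand(2, 3)
print(x.is_contiguous())  # True: row-major layout
print(x.stride())         # (3, 1)

y = x.t()                 # transpose returns a view, not a copy
print(y.is_contiguous())  # False: memory order no longer matches index order
print(y.stride())         # (1, 3)
```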
A simple solution would be to call .contiguous() on the tensor before the operation, e.g.:
```python
import torch

# Dummy initialisation
x = torch.rand(1, 3, 224, 224)
# Example of an operation that returns a non-contiguous view
# (note: torch.transpose takes exactly two dims to swap)
x = torch.transpose(x, 2, 3)
# Make a contiguous copy of the tensor
x = x.contiguous()
# Some other operations...
```
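Applied to this repository, the fix is to make the input contiguous before the pooling call. A sketch, assuming the SoftPool2d module exposed by this package (as shown in the README) and a CUDA device:

```python
import torch
from SoftPool import SoftPool2d  # assumption: module name as shown in this repo's README

pool = SoftPool2d(kernel_size=(2, 2), stride=(2, 2)).cuda()

x = torch.rand(1, 3, 224, 224, device="cuda")
x = torch.transpose(x, 2, 3)    # non-contiguous view: would trigger the INTERNAL ASSERT
out = pool(x.contiguous())      # contiguous copy satisfies the CUDA kernel's check
```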
Do note that .contiguous() will create a copy of the tensor if it is not already contiguous, so if this occurs frequently in your code you may see some increased memory usage.
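You can confirm when a copy is actually made by comparing data pointers; a small sketch:

```python
import torch

x = torch.rand(1, 3, 224, 224)
# Already contiguous: .contiguous() is a no-op and returns the same storage
print(x.contiguous().data_ptr() == x.data_ptr())  # True

y = torch.transpose(x, 2, 3)
# Non-contiguous view: .contiguous() allocates and fills a new tensor
print(y.contiguous().data_ptr() == y.data_ptr())  # False
```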
Best, Alex