Closed · njzjz closed this 1 week ago
Our kernels assume that the memory of input tensors is contiguous. However, torch's autograd may return tensors with non-contiguous memory, so we need to call the `contiguous` method on them to ensure the memory is contiguous.

xref: https://github.com/pytorch/pytorch/issues/64529#issuecomment-914917507
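A minimal sketch of the failure mode (illustrative only, not this project's kernel code): a transposed view shares storage with the original tensor but is non-contiguous, so a kernel that assumes dense row-major memory would read the wrong values. Calling `contiguous()` copies the data into fresh row-major storage.

```python
import torch

# A transposed view shares the original storage and is non-contiguous.
x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
view = x.t()
print(view.is_contiguous())   # False

# contiguous() returns a dense row-major copy with the same values,
# which is safe to hand to a kernel that assumes contiguous memory.
dense = view.contiguous()
print(dense.is_contiguous())  # True
assert torch.equal(view, dense)
```

Note that `contiguous()` is a no-op (returns `self`) when the tensor is already contiguous, so calling it defensively before the kernel launch costs nothing in the common case.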