np.arange() returns an Iterable[numpy.int64]. However, many torch APIs only accept the built-in int type. In a multi-GPU setting, the original code crashes as follows:
```
  File "/opt/conda/lib/python3.6/site-packages/torch/cuda/comm.py", line 157, in scatter
    with torch.cuda.device(device), torch.cuda.stream(stream):
  File "/opt/conda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 227, in __enter__
    torch._C._cuda_setDevice(self.idx)
RuntimeError: invalid argument to setDevice

  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 14, in scatter_map
    return Scatter.apply(target_gpus, None, dim, obj)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 74, in forward
    outputs = comm.scatter(input, ctx.target_gpus, ctx.chunk_sizes, ctx.dim, streams)
  File "/opt/conda/lib/python3.6/site-packages/torch/cuda/comm.py", line 159, in scatter
    outputs.append(chunk.cuda(device, non_blocking=True))
TypeError: cuda(): argument 'device' (position 1) must be torch.device, not numpy.int64

  File "/opt/conda/lib/python3.6/site-packages/torch/cuda/comm.py", line 197, in gather
    result = tensors[0].new(expected_size, device=destination)
TypeError: new() received an invalid combination of arguments - got (torch.Size, device=numpy.int64), but expected one of:
 * (torch.device device)
 * (tuple of ints size, torch.device device)
      didn't match because some of the arguments have invalid types: (torch.Size, device=numpy.int64)
 * (torch.Storage storage)
 * (Tensor other)
 * (object data, torch.device device)
      didn't match because some of the arguments have invalid types: (torch.Size, device=numpy.int64)
```
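The type mismatch can be reproduced without torch. A minimal sketch (the variable name `gpu_ids` is illustrative, not from the original code): elements of an np.arange() result are numpy.int64, while calling .tolist() (or wrapping each element in int()) yields plain Python ints, which is the type torch device arguments expect.

```python
import numpy as np

# Elements produced by np.arange() are numpy.int64, not Python int.
gpu_ids = np.arange(4)
print(type(gpu_ids[0]))   # numpy.int64

# .tolist() converts every element to a built-in Python int.
gpu_ids = np.arange(4).tolist()
print(type(gpu_ids[0]))   # int
```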