dearleiii / PIRM-2018-SISR-Challenge

Super Resolution
https://www.pirm2018.org/PIRM-SR.html

TypeError: Broadcast function not implemented for CPU tensors #9

Closed: dearleiii closed this issue 6 years ago

dearleiii commented 6 years ago

```
leichen@gpu-compute1$ python3 scatter_edsr.py
cuda.current_device= 0
DataParallel(
  (module): APXM_edsr(
    (main): Sequential(
      (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
      (1): LeakyReLU(negative_slope=0.2, inplace)
      (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
      (3): LeakyReLU(negative_slope=0.2, inplace)
      (4): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
      (5): LeakyReLU(negative_slope=0.2, inplace)
    )
    (regressor): Sequential(
      (0): Linear(in_features=8388608, out_features=512, bias=True)
      (1): LeakyReLU(negative_slope=0.01)
      (2): Linear(in_features=512, out_features=1, bias=True)
    )
  )
)
===== HYPERPARAMETERS =====
batch_size= 40
epochs= 5
learning_rate= 0.001
```

```
==============================
Traceback (most recent call last):
  File "scatter_edsr.py", line 155, in <module>
    trainNet(approximator, batch_size = 40, n_epochs = 5, learning_rate = 0.001)
  File "scatter_edsr.py", line 116, in trainNet
    outputs = net(inputs)
  File "/home/home2/leichen/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/home2/leichen/.local/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 113, in forward
    replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
  File "/home/home2/leichen/.local/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 118, in replicate
    return replicate(module, device_ids)
  File "/home/home2/leichen/.local/lib/python3.5/site-packages/torch/nn/parallel/replicate.py", line 12, in replicate
    param_copies = Broadcast.apply(devices, *params)
  File "/home/home2/leichen/.local/lib/python3.5/site-packages/torch/nn/parallel/_functions.py", line 11, in forward
    raise TypeError('Broadcast function not implemented for CPU tensors')
TypeError: Broadcast function not implemented for CPU tensors
```

The failure happens inside DataParallel.replicate(): the module's parameters are still CPU tensors when Broadcast.apply tries to copy them to the GPUs.

dearleiii commented 6 years ago

Some of the model's parameters are not on the GPU. Try calling model.cuda() before wrapping the model in nn.DataParallel, as sketched below.
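
A minimal sketch of that fix, assuming scatter_edsr.py builds the model roughly like this (APXM_edsr is this repo's model class from the log above; the surrounding wiring is an assumption):

```python
import torch.nn as nn

# Hypothetical wiring based on the printed model above.
model = APXM_edsr()
model = model.cuda()           # move parameters/buffers to the GPU first
net = nn.DataParallel(model)   # replicate() can now broadcast CUDA parameters

# Inside the training loop the inputs must be CUDA tensors as well:
# inputs = inputs.cuda()
# outputs = net(inputs)
```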

dearleiii commented 6 years ago

This is the check that raises the error: DataParallel.replicate() broadcasts every parameter to the target GPUs, and Broadcast.forward rejects any input that is not a CUDA tensor. From pytorch/pytorch/blob/master/torch/nn/parallel/_functions.py#L6:

```python
class Broadcast(Function):

    @staticmethod
    def forward(ctx, target_gpus, *inputs):
        # Every parameter must already be a CUDA tensor before replication.
        if not all(input.is_cuda for input in inputs):
            raise TypeError('Broadcast function not implemented for CPU tensors')
        ctx.target_gpus = target_gpus
        if len(inputs) == 0:
            return tuple()
        ctx.num_inputs = len(inputs)
        ctx.input_device = inputs[0].get_device()
        # Copy all inputs onto each target GPU in coalesced transfers.
        outputs = comm.broadcast_coalesced(inputs, ctx.target_gpus)
        non_differentiables = []
        for idx, input_requires_grad in enumerate(ctx.needs_input_grad[1:]):
            if not input_requires_grad:
                for output in outputs:
                    non_differentiables.append(output[idx])
        ctx.mark_non_differentiable(*non_differentiables)
        return tuple([t for tensors in outputs for t in tensors])

    @staticmethod
    def backward(ctx, *grad_outputs):
        # Gradients from each replica are reduce-summed back onto the source device.
        return (None,) + ReduceAddCoalesced.apply(ctx.input_device, ctx.num_inputs, *grad_outputs)
```
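
A minimal sketch that trips the same guard directly (a hypothetical repro; the device ids are placeholders):

```python
import torch
from torch.nn.parallel._functions import Broadcast

# cpu_param never reaches the GPU, so the all(input.is_cuda ...) check fails:
cpu_param = torch.randn(3)
Broadcast.apply([0, 1], cpu_param)
# TypeError: Broadcast function not implemented for CPU tensors
```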
dearleiii commented 6 years ago

Resolved by adding model.to(device), so the parameters are CUDA tensors before DataParallel replicates them.
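
For reference, the device-agnostic form of the same fix (a sketch; the variable names are assumptions, not this repo's exact code):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = APXM_edsr().to(device)   # parameters/buffers now live on the GPU
net = nn.DataParallel(model)

# In the training loop, move each batch to the same device:
# inputs = inputs.to(device)
# outputs = net(inputs)
```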