**xaionaro** opened this issue 6 years ago
```
danusya@werewolf:~/PyTorch-Multi-Style-Transfer/experiments$ python main.py train --dataset ~/dataset/ --vgg-model-dir caleido_vgg --save-model-dir caleido_model --epochs 2 --cuda 0
/usr/local/lib/python2.7/dist-packages/torchvision/transforms/transforms.py:156: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  "please use transforms.Resize instead.")
Net(
  (gram): GramMatrix()
  (model1): Sequential(
    (0): ConvLayer(
      (reflection_pad): ReflectionPad2d((3, 3, 3, 3))
      (conv2d): Conv2d(3, 64, kernel_size=(7, 7), stride=(1, 1))
    )
    (1): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False)
    (2): ReLU(inplace)
    (3): Bottleneck(
      (residual_layer): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2))
      (conv_block): Sequential(
        (0): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False)
        (1): ReLU(inplace)
        (2): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1))
        (3): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=False)
        (4): ReLU(inplace)
        (5): ConvLayer(
          (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
          (conv2d): Conv2d(32, 32, kernel_size=(3, 3), stride=(2, 2))
        )
        (6): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=False)
        (7): ReLU(inplace)
        (8): Conv2d(32, 128, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (4): Bottleneck(
      (residual_layer): Conv2d(128, 512, kernel_size=(1, 1), stride=(2, 2))
      (conv_block): Sequential(
        (0): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (1): ReLU(inplace)
        (2): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
        (3): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (4): ReLU(inplace)
        (5): ConvLayer(
          (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
          (conv2d): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2))
        )
        (6): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (7): ReLU(inplace)
        (8): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
      )
    )
  )
  (ins): Inspiration(N x 512)
  (model): Sequential(
    (0): Sequential(
      (0): ConvLayer(
        (reflection_pad): ReflectionPad2d((3, 3, 3, 3))
        (conv2d): Conv2d(3, 64, kernel_size=(7, 7), stride=(1, 1))
      )
      (1): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False)
      (2): ReLU(inplace)
      (3): Bottleneck(
        (residual_layer): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2))
        (conv_block): Sequential(
          (0): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False)
          (1): ReLU(inplace)
          (2): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1))
          (3): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=False)
          (4): ReLU(inplace)
          (5): ConvLayer(
            (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
            (conv2d): Conv2d(32, 32, kernel_size=(3, 3), stride=(2, 2))
          )
          (6): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=False)
          (7): ReLU(inplace)
          (8): Conv2d(32, 128, kernel_size=(1, 1), stride=(1, 1))
        )
      )
      (4): Bottleneck(
        (residual_layer): Conv2d(128, 512, kernel_size=(1, 1), stride=(2, 2))
        (conv_block): Sequential(
          (0): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
          (1): ReLU(inplace)
          (2): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))
          (3): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
          (4): ReLU(inplace)
          (5): ConvLayer(
            (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
            (conv2d): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2))
          )
          (6): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
          (7): ReLU(inplace)
          (8): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
        )
      )
    )
    (1): Inspiration(N x 512)
    (2): Bottleneck(
      (conv_block): Sequential(
        (0): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False)
        (1): ReLU(inplace)
        (2): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
        (3): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (4): ReLU(inplace)
        (5): ConvLayer(
          (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
          (conv2d): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
        )
        (6): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (7): ReLU(inplace)
        (8): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (3): Bottleneck(
      (conv_block): Sequential(
        (0): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False)
        (1): ReLU(inplace)
        (2): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
        (3): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (4): ReLU(inplace)
        (5): ConvLayer(
          (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
          (conv2d): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
        )
        (6): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (7): ReLU(inplace)
        (8): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (4): Bottleneck(
      (conv_block): Sequential(
        (0): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False)
        (1): ReLU(inplace)
        (2): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
        (3): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (4): ReLU(inplace)
        (5): ConvLayer(
          (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
          (conv2d): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
        )
        (6): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (7): ReLU(inplace)
        (8): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (5): Bottleneck(
      (conv_block): Sequential(
        (0): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False)
        (1): ReLU(inplace)
        (2): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
        (3): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (4): ReLU(inplace)
        (5): ConvLayer(
          (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
          (conv2d): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
        )
        (6): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (7): ReLU(inplace)
        (8): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (6): Bottleneck(
      (conv_block): Sequential(
        (0): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False)
        (1): ReLU(inplace)
        (2): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
        (3): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (4): ReLU(inplace)
        (5): ConvLayer(
          (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
          (conv2d): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
        )
        (6): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (7): ReLU(inplace)
        (8): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (7): Bottleneck(
      (conv_block): Sequential(
        (0): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False)
        (1): ReLU(inplace)
        (2): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
        (3): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (4): ReLU(inplace)
        (5): ConvLayer(
          (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
          (conv2d): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1))
        )
        (6): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (7): ReLU(inplace)
        (8): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (8): UpBottleneck(
      (residual_layer): UpsampleConvLayer(
        (upsample_layer): Upsample(scale_factor=2, mode=nearest)
        (conv2d): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
      )
      (conv_block): Sequential(
        (0): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False)
        (1): ReLU(inplace)
        (2): Conv2d(512, 32, kernel_size=(1, 1), stride=(1, 1))
        (3): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=False)
        (4): ReLU(inplace)
        (5): UpsampleConvLayer(
          (upsample_layer): Upsample(scale_factor=2, mode=nearest)
          (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
          (conv2d): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))
        )
        (6): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=False)
        (7): ReLU(inplace)
        (8): Conv2d(32, 128, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (9): UpBottleneck(
      (residual_layer): UpsampleConvLayer(
        (upsample_layer): Upsample(scale_factor=2, mode=nearest)
        (conv2d): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))
      )
      (conv_block): Sequential(
        (0): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False)
        (1): ReLU(inplace)
        (2): Conv2d(128, 16, kernel_size=(1, 1), stride=(1, 1))
        (3): InstanceNorm2d(16, eps=1e-05, momentum=0.1, affine=False)
        (4): ReLU(inplace)
        (5): UpsampleConvLayer(
          (upsample_layer): Upsample(scale_factor=2, mode=nearest)
          (reflection_pad): ReflectionPad2d((1, 1, 1, 1))
          (conv2d): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1))
        )
        (6): InstanceNorm2d(16, eps=1e-05, momentum=0.1, affine=False)
        (7): ReLU(inplace)
        (8): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (10): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False)
    (11): ReLU(inplace)
    (12): ConvLayer(
      (reflection_pad): ReflectionPad2d((3, 3, 3, 3))
      (conv2d): Conv2d(64, 3, kernel_size=(7, 7), stride=(1, 1))
    )
  )
)
Traceback (most recent call last):
  File "main.py", line 287, in <module>
    main()
  File "main.py", line 40, in main
    train(args)
  File "main.py", line 159, in train
    style_model.setTarget(style_v)
  File "/home/danusya/PyTorch-Multi-Style-Transfer/experiments/net.py", line 293, in setTarget
    F = self.model1(Xs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/container.py", line 67, in forward
    input = module(input)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/danusya/PyTorch-Multi-Style-Transfer/experiments/net.py", line 153, in forward
    out = self.conv2d(out)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/conv.py", line 282, in forward
    self.padding, self.dilation, self.groups)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py", line 90, in conv2d
    return f(input, weight, bias)
RuntimeError: Input type (CUDAFloatTensor) and weight type (CPUFloatTensor) should be the same
```
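The failing frame (`F = self.model1(Xs)` in `net.py`) can be probed directly: before the forward pass, compare where the weights and the input tensor actually live. A minimal sketch with stand-in modules (the names `model1`/`Xs` only mirror the traceback; the layer is hypothetical, and this uses the current PyTorch API rather than 0.3's `Variable`):

```python
import torch
import torch.nn as nn

# Stand-in for Net.model1 and the style batch from the traceback.
model1 = nn.Sequential(nn.Conv2d(3, 64, kernel_size=7))
Xs = torch.randn(1, 3, 16, 16)

# conv2d raises the RuntimeError above whenever these two disagree.
weight_on_gpu = next(model1.parameters()).is_cuda
input_on_gpu = Xs.is_cuda
print(weight_on_gpu, input_on_gpu)
```

Printing these two flags right before the crashing call shows which side was moved to the GPU despite `--cuda 0`.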
There seems to be a problem where some of the data is loaded onto the GPU even with CUDA disabled. Did you test it successfully with CUDA disabled?
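For what it's worth, the usual cause of this error is an unconditional `.cuda()` applied to only one of model and input. A sketch of the guarded pattern (current PyTorch API for brevity; `use_cuda` stands in for the script's `--cuda` flag, and the layer/tensor names are illustrative):

```python
import torch
import torch.nn as nn

use_cuda = torch.cuda.is_available()  # stand-in for the --cuda flag

model = nn.Conv2d(3, 64, kernel_size=7)
style_v = torch.randn(1, 3, 32, 32)   # stand-in for the style batch

# Guard BOTH moves on the same flag; moving only one of them reproduces:
#   RuntimeError: Input type (CUDAFloatTensor) and weight type (CPUFloatTensor) ...
if use_cuda:
    model = model.cuda()
    style_v = style_v.cuda()

out = model(style_v)  # devices agree, so the forward pass succeeds
```

The same idea applies to every tensor created inside the training loop, not just the model itself.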
I got the same problem with pytorch==0.3.1 too. Have you found a solution?