vt-vl-lab / 3d-photo-inpainting

[CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting
https://shihmengli.github.io/3D-Photo-Inpainting/

RuntimeError: Error(s) in loading state_dict for MonoDepthNet: Missing key(s) in state_dict #79

Closed: xadnem closed this issue 4 years ago

xadnem commented 4 years ago

I don't know why it's not working. I would appreciate it if anyone who knows the reason could answer.

```
(3DP) C:\Users\windows10\Desktop\3d\3d-photo-inpainting-master\3d-photo-inpainting-master>python main.py --config argument.yml
running on device 0
  0%|          | 0/1 [00:00<?, ?it/s]
Current Source ==> pigeon
Running depth extraction at 1595421246.9266007
initialize device: cpu
  0%|          | 0/1 [00:13<?, ?it/s]
Traceback (most recent call last):
  File "main.py", line 54, in <module>
    config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=640)
  File "C:\Users\windows10\Desktop\3d\3d-photo-inpainting-master\3d-photo-inpainting-master\MiDaS\run.py", line 29, in run_depth
    model = Net(model_path)
  File "C:\Users\windows10\Desktop\3d\3d-photo-inpainting-master\3d-photo-inpainting-master\MiDaS\monodepth_net.py", line 52, in __init__
    self.load(path)
  File "C:\Users\windows10\Desktop\3d\3d-photo-inpainting-master\3d-photo-inpainting-master\MiDaS\monodepth_net.py", line 90, in load
    self.load_state_dict(parameters)
  File "C:\Users\windows10\anaconda3\envs\3DP\lib\site-packages\torch\nn\modules\module.py", line 830, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for MonoDepthNet:
    Missing key(s) in state_dict: "scratch.refinenet4.resConfUnit.conv1.weight", "scratch.refinenet4.resConfUnit.conv1.bias", "scratch.refinenet4.resConfUnit.conv2.weight", "scratch.refinenet3.resConfUnit.conv1.weight", "scratch.refinenet3.resConfUnit.conv1.bias", "scratch.refinenet3.resConfUnit.conv2.weight", "scratch.refinenet2.resConfUnit.conv1.weight", "scratch.refinenet2.resConfUnit.conv1.bias", "scratch.refinenet2.resConfUnit.conv2.weight", "scratch.refinenet1.resConfUnit.conv1.weight", "scratch.refinenet1.resConfUnit.conv1.bias", "scratch.refinenet1.resConfUnit.conv2.weight", "scratch.output_conv.1.weight", "scratch.output_conv.1.bias".
    Unexpected key(s) in state_dict: "pretrained.layer3.6.conv1.weight", "pretrained.layer3.6.bn1.weight", "pretrained.layer3.6.bn1.bias", "pretrained.layer3.6.bn1.running_mean", "pretrained.layer3.6.bn1.running_var", "pretrained.layer3.6.bn1.num_batches_tracked", ..., "pretrained.layer3.22.bn3.running_var", "pretrained.layer3.22.bn3.num_batches_tracked", "scratch.refinenet4.resConfUnit1.conv1.weight", "scratch.refinenet4.resConfUnit1.conv1.bias", "scratch.refinenet4.resConfUnit1.conv2.weight", "scratch.refinenet4.resConfUnit1.conv2.bias", "scratch.refinenet4.resConfUnit2.conv1.weight", ..., "scratch.refinenet1.resConfUnit2.conv2.bias", "scratch.output_conv.4.weight", "scratch.output_conv.4.bias", "scratch.output_conv.2.weight", "scratch.output_conv.2.bias".
    size mismatch for pretrained.layer1.4.0.conv1.weight: copying a param with shape torch.Size([256, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 1, 1]).
    size mismatch for pretrained.layer1.4.0.bn1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([64]).
    ... (many more size mismatches of the same form for parameters in pretrained.layer1, layer2, and layer3) ...
    size mismatch for pretrained.layer3.4.conv1.weight: copying a param with shape torch.Size([1024, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 1024, 1, 1]).
    size mismatch for pretrained.layer3.4.bn1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
```
size mismatch for pretrained.layer3.4.bn1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.4.bn1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.4.bn1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.4.conv2.weight: copying a param with shape torch.Size([1024, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]). size mismatch for pretrained.layer3.4.bn2.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.4.bn2.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.4.bn2.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.4.bn2.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.4.conv3.weight: copying a param with shape torch.Size([1024, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([1024, 256, 1, 1]). size mismatch for pretrained.layer3.5.conv1.weight: copying a param with shape torch.Size([1024, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 1024, 1, 1]). size mismatch for pretrained.layer3.5.bn1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). 
size mismatch for pretrained.layer3.5.bn1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.5.bn1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.5.bn1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.5.conv2.weight: copying a param with shape torch.Size([1024, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]). size mismatch for pretrained.layer3.5.bn2.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.5.bn2.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.5.bn2.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.5.bn2.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]). size mismatch for pretrained.layer3.5.conv3.weight: copying a param with shape torch.Size([1024, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([1024, 256, 1, 1]). size mismatch for pretrained.layer4.0.conv1.weight: copying a param with shape torch.Size([2048, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 1024, 1, 1]). size mismatch for pretrained.layer4.0.bn1.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). 
size mismatch for pretrained.layer4.0.bn1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.0.bn1.running_mean: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.0.bn1.running_var: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.0.conv2.weight: copying a param with shape torch.Size([2048, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for pretrained.layer4.0.bn2.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.0.bn2.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.0.bn2.running_mean: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.0.bn2.running_var: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.0.conv3.weight: copying a param with shape torch.Size([2048, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([2048, 512, 1, 1]). size mismatch for pretrained.layer4.1.conv1.weight: copying a param with shape torch.Size([2048, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 2048, 1, 1]). size mismatch for pretrained.layer4.1.bn1.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). 
size mismatch for pretrained.layer4.1.bn1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.1.bn1.running_mean: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.1.bn1.running_var: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.1.conv2.weight: copying a param with shape torch.Size([2048, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for pretrained.layer4.1.bn2.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.1.bn2.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.1.bn2.running_mean: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.1.bn2.running_var: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.1.conv3.weight: copying a param with shape torch.Size([2048, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([2048, 512, 1, 1]). size mismatch for pretrained.layer4.2.conv1.weight: copying a param with shape torch.Size([2048, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 2048, 1, 1]). size mismatch for pretrained.layer4.2.bn1.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). 
size mismatch for pretrained.layer4.2.bn1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.2.bn1.running_mean: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.2.bn1.running_var: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.2.conv2.weight: copying a param with shape torch.Size([2048, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]). size mismatch for pretrained.layer4.2.bn2.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.2.bn2.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.2.bn2.running_mean: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.2.bn2.running_var: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]). size mismatch for pretrained.layer4.2.conv3.weight: copying a param with shape torch.Size([2048, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([2048, 512, 1, 1]).

(3DP) C:\Users\windows10\Desktop\3d\3d-photo-inpainting-master\3d-photo-inpainting-master>

And when I tested setting gpu_ids: -1 in my argument.yml, I hit the same issue.

When I tested setting offscreen_rendering to True in my argument.yml:

(3DP) C:\Users\windows10\Desktop\3d\3d-photo-inpainting-master\3d-photo-inpainting-master>python main.py --config argument.yml
Traceback (most recent call last):
  File "main.py", line 30, in <module>
    vispy.use(app='egl')
  File "C:\Users\windows10\anaconda3\envs\3DP\lib\site-packages\vispy\util\wrappers.py", line 97, in use
    use_app(app)
  File "C:\Users\windows10\anaconda3\envs\3DP\lib\site-packages\vispy\app\_default_app.py", line 47, in use_app
    default_app = Application(backend_name)
  File "C:\Users\windows10\anaconda3\envs\3DP\lib\site-packages\vispy\app\application.py", line 49, in __init__
    self._use(backend_name)
  File "C:\Users\windows10\anaconda3\envs\3DP\lib\site-packages\vispy\app\application.py", line 235, in _use
    raise RuntimeError(msg)
RuntimeError: Could not import backend "EGL": EGL library not found
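For context, these are the two argument.yml knobs being toggled above (a sketch from memory; exact key names and defaults should be checked against your own copy of the file):

```yaml
gpu_ids: 0                  # -1 forces CPU inference
offscreen_rendering: False  # True selects the EGL backend, which needs libEGL (not available on Windows)
```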

jili5ms commented 4 years ago

Same here. Anyone have a solution? I used the same Windows branch: https://github.com/ababilinski/3d-photo-inpainting/tree/feature/windows-conda-support

jili5ms commented 4 years ago

It seems the model got updated (size and architecture); the model from download.sh is no longer available.
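If anyone wants to confirm this, a quick way is to diff the checkpoint's keys against the model's expected keys before calling load_state_dict. A minimal sketch — the key names below are illustrative stand-ins, not the real MiDaS checkpoint contents; with torch installed you would get the real sets via `torch.load(path).keys()` and `model.state_dict().keys()`:

```python
# Sketch: diagnosing a load_state_dict failure by diffing key sets.
# In practice (requires torch and the downloaded checkpoint):
#   ckpt_keys  = set(torch.load("model.pt", map_location="cpu").keys())
#   model_keys = set(model.state_dict().keys())
# The names below are illustrative stand-ins for old vs. new architectures.
ckpt_keys = {
    "scratch.refinenet4.resConfUnit1.conv1.weight",  # newer checkpoint naming
    "scratch.output_conv.0.weight",
}
model_keys = {
    "scratch.refinenet4.resConfUnit.conv1.weight",   # older model code naming
    "scratch.output_conv.1.weight",
}

missing = sorted(model_keys - ckpt_keys)     # keys the model expects but the file lacks
unexpected = sorted(ckpt_keys - model_keys)  # keys the file has that the model doesn't

print("Missing:", missing)
print("Unexpected:", unexpected)
```

A non-empty "Missing" list is exactly what produces the `Missing key(s) in state_dict` error in this issue, and it tells you the checkpoint and model code come from different versions.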

So is it possible to do a quick fix by bringing the old model version back for the Windows branch?

jili5ms commented 4 years ago

Well, I fixed it. We can make it work using the main branch; the Windows branch is not updated. For Win10 users, if you see this error:

    if self.version == other.version:
AttributeError: 'LooseVersion' object has no attribute 'version'

you can install the OpenGL driver (https://developer.nvidia.com/opengl-driver) and check whether your system works with vispy (https://github.com/vispy/vispy/issues/1342):

from vispy.app import use_app, Canvas
from vispy.gloo import gl

app = use_app()  # pick whichever backend vispy finds on this system
canvas = Canvas('Test', (10, 10), show=False, app=app)
print(canvas)
print(canvas._backend._vispy_set_current())  # make the GL context current
print(gl.glGetParameter(gl.GL_VERSION))      # should print your OpenGL version string