pytorch / vision

Datasets, Transforms and Models specific to Computer Vision
https://pytorch.org/vision
BSD 3-Clause "New" or "Revised" License

size mismatch for rpn #8588

Closed FiReTiTi closed 3 months ago

FiReTiTi commented 3 months ago

🐛 Describe the bug

I created a Mask R-CNN model using a set of parameters that I saved in a JSON file. Once the model was trained, I saved the weights with torch.save(model.state_dict(), "MaskRCNN.pt"). Later, I recreated the same model and loaded the saved weights with model.load_state_dict(torch.load("MaskRCNN.pt", map_location=Device)).
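
For reference, the save/load flow is the standard state_dict round trip; a minimal sketch (the constructor and num_classes below are placeholders, not my actual JSON parameters):

```python
import torch
import torchvision

# Build the model from the saved parameters (placeholder arguments here).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None, num_classes=2)

# After training: save only the weights.
torch.save(model.state_dict(), "MaskRCNN.pt")

# Later: recreate the exact same model, then load the weights.
Device = torch.device("cpu")
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.load_state_dict(torch.load("MaskRCNN.pt", map_location=Device))
model.eval()
```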

On my laptop (MacBook Pro M2) using Torch 2.2.2, TorchVision 0.17.2 (most up to date for this environment), and CPU only, everything works just fine.

However, on a CentOS-based cluster with Torch 2.4, TorchVision 0.19 (most up to date for this environment), and CUDA 12.1.1, I get the following error when loading the weights:

  File "/home/XXX//MaskRCNN.py", line 84, in Load
      model.load_state_dict(torch.load(WeightsPath, map_location=Device))
    File "/home/XXX/torch/nn/modules/module.py", line 2215, in load_state_dict
      raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
  RuntimeError: Error(s) in loading state_dict for MaskRCNN:
    size mismatch for rpn.head.cls_logits.weight: copying a param with shape torch.Size([6, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([14, 256, 1, 1]).
    size mismatch for rpn.head.cls_logits.bias: copying a param with shape torch.Size([6]) from checkpoint, the shape in current model is torch.Size([14]).
    size mismatch for rpn.head.bbox_pred.weight: copying a param with shape torch.Size([24, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([56, 256, 1, 1]).
    size mismatch for rpn.head.bbox_pred.bias: copying a param with shape torch.Size([24]) from checkpoint, the shape in current model is torch.Size([56]).

The code is exactly the same on my laptop and on the cluster. I double-checked, and I used exactly the same parameters to create ALL the models.

How can I fix this?
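
For context, the mismatched dimension (6 in the checkpoint vs. 14 in the freshly built model, and 4x that for bbox_pred: 24 vs. 56) is the number of anchors per spatial location, which the RPN head gets from the anchor generator (len(sizes) * len(aspect_ratios) per feature-map level). A minimal sketch of that relationship, with made-up anchor settings rather than my actual parameters:

```python
from torchvision.models.detection.anchor_utils import AnchorGenerator
from torchvision.models.detection.rpn import RPNHead

# Hypothetical settings: 2 sizes x 3 aspect ratios = 6 anchors per location.
anchor_generator = AnchorGenerator(
    sizes=((32, 64),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)
num_anchors = anchor_generator.num_anchors_per_location()[0]  # 6

head = RPNHead(in_channels=256, num_anchors=num_anchors)
print(head.cls_logits.weight.shape)  # torch.Size([6, 256, 1, 1])
print(head.bbox_pred.weight.shape)   # torch.Size([24, 256, 1, 1])
```

So a checkpoint saved with one anchor configuration cannot be loaded into a model built with a different one.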

Versions

Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: CentOS Linux release 7.9.2009 (Core) (x86_64)
GCC version: (GCC) 13.2.0
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.17

Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.10.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
GPU 4: Tesla V100-SXM2-32GB
GPU 5: Tesla V100-SXM2-32GB
GPU 6: Tesla V100-SXM2-32GB
GPU 7: Tesla V100-SXM2-32GB

Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/local/cuda-9.1/targets/x86_64-linux/lib/libcudnn.so.7.0.5
/usr/local/cuda-9.2/targets/x86_64-linux/lib/libcudnn.so.7.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Stepping: 4
CPU MHz: 1000.000
CPU max MHz: 2401.0000
CPU min MHz: 1000.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 28160K
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke spec_ctrl intel_stibp flush_l1d

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] numpydoc==1.8.0
[pip3] torch==2.4.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] numpydoc 1.8.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi

I used modules to load the drivers, so here is some more information:

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0

$ nvidia-smi
Wed Aug 14 04:07:42 2024
NVIDIA-SMI 550.90.07    Driver Version: 550.90.07    CUDA Version: 12.4

NicolasHug commented 3 months ago

Hi @FiReTiTi

We didn't change anything in Mask R-CNN between these versions. I'm trying to rule out torchvision as the source of the discrepancy: can you check whether the output of torch.load(WeightsPath, map_location=Device) is the same in both envs?
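
A quick way to check is to dump the keys and shapes of the loaded state dict in each environment and diff the two dumps; a sketch, reusing your WeightsPath/Device names:

```python
import torch

state_dict = torch.load(WeightsPath, map_location=Device)
for key, tensor in sorted(state_dict.items()):
    print(key, tuple(tensor.shape))
# Redirect this output to a file in each environment and diff the files;
# the rpn.head.* entries are the ones to look at.
```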

FiReTiTi commented 3 months ago

@NicolasHug Thank you for your answer. I printed the outputs in both environments. As one has GPUs and the other does not, I removed , device='cuda:0' from the file exported in the GPU environment. Then I didn't find any difference between the files using the Linux diff command.
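
For a comparison that does not go through the printed representation, the two checkpoints can also be compared tensor by tensor; a sketch (the file names stand for hypothetical copies of the same weights file from the two environments):

```python
import torch

sd_laptop = torch.load("MaskRCNN_laptop.pt", map_location="cpu")
sd_cluster = torch.load("MaskRCNN_cluster.pt", map_location="cpu")

assert sd_laptop.keys() == sd_cluster.keys(), "key sets differ"
for key in sd_laptop:
    if sd_laptop[key].shape != sd_cluster[key].shape:
        print("shape mismatch:", key, sd_laptop[key].shape, sd_cluster[key].shape)
    elif not torch.equal(sd_laptop[key], sd_cluster[key]):
        print("value mismatch:", key)
```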

FiReTiTi commented 3 months ago

I printed the models before training and after training (when recreated to load the trained weights). The RPNs seem to be the same, see the attached screenshot. In fact, the entire model printout is the same (compared with the diff command). (Attached screenshot: Screenshot 2024-08-15 at 01 54 51)

FiReTiTi commented 3 months ago

Mmm... mea culpa. It turns out a corrupted file had ended up in the wrong directory and was being loaded in place of the proper weights. Everything is working fine now.