ShanWang-Shan / HomoFusion

Code for ICCV2023 paper: Homography Guided Temporal Fusion for Road Line and Marking Segmentation

Error loading checkpoint #5

Open Geralt-of-winterfall opened 1 month ago

Geralt-of-winterfall commented 1 month ago

The error is shown below. I am using the pretrained model Apolloscape/model.ckpt provided in the README; it looks like the model sizes do not match.

> python3 scripts/benchmark_val.py

Global seed set to 2022
/media/user/opensourceDataset/Apollo/val.txt
/media/user/opensourceDataset/Apollo/road05_Pose/Record001/Camera 5/pose.txt
/media/user/opensourceDataset/Apollo/road05_Pose/Record001/Camera 6/pose.txt
Loaded pretrained weights for efficientnet-b6
Error executing job with overrides: []
Traceback (most recent call last):
  File "scripts/benchmark_val.py", line 67, in main
    network = load_backbone(CHECKPOINT_PATH)
  File "/home/user/桌面/200_Github_Repository/HomoFusion/homo_transformer/common.py", line 77, in load_backbone
    backbone.load_state_dict(state_dict)
  File "/home/ruser/miniconda3/envs/homof/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1497, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for HomoTransformer:
        Unexpected key(s) in state_dict: "encoder.backbone.layers.1.4._expand_conv.weight", "encoder.backbone.layers.1.4._bn0.weight", "encoder.backbone.layers.1.4._bn0.bias", "encoder.backbone.layers.1.4._bn0.running_mean", "encoder.backbone.layers.1.4._bn0.running_var", "encoder.backbone.layers.1.4._bn0.num_batches_tracked", "encoder.backbone.layers.1.4._depthwise_conv.weight", "encoder.backbone.layers.1.4._bn1.weight", "encoder.backbone.layers.1.4._bn1.bias", "encoder.backbone.layers.1.4._bn1.running_mean", "encoder.backbone.layers.1.4._bn1.running_var", "encoder.backbone.layers.1.4._bn1.num_batches_tracked", "encoder.backbone.layers.1.4._se_reduce.weight", "encoder.backbone.layers.1.4._se_reduce.bias", "encoder.backbone.layers.1.4._se_expand.weight", "encoder.backbone.layers.1.4._se_expand.bias", "encoder.backbone.layers.1.4._project_conv.weight", "encoder.backbone.layers.1.4._bn2.weight", "encoder.backbone.layers.1.4._bn2.bias", "encoder.backbone.layers.1.4._bn2.running_mean", "encoder.backbone.layers.1.4._bn2.running_var", "encoder.backbone.layers.1.4._bn2.num_batches_tracked", "encoder.backbone.layers.1.5._expand_conv.weight", "encoder.backbone.layers.1.5._bn0.weight", "encoder.backbone.layers.1.5._bn0.bias", "encoder.backbone.layers.1.5._bn0.running_mean", "encoder.backbone.layers.1.5._bn0.running_var", "encoder.backbone.layers.1.5._bn0.num_batches_tracked", "encoder.backbone.layers.1.5._depthwise_conv.weight", "encoder.backbone.layers.1.5._bn1.weight", "encoder.backbone.layers.1.5._bn1.bias", "encoder.backbone.layers.1.5._bn1.running_mean", "encoder.backbone.layers.1.5._bn1.running_var", "encoder.backbone.layers.1.5._bn1.num_batches_tracked", "encoder.backbone.layers.1.5._se_reduce.weight", "encoder.backbone.layers.1.5._se_reduce.bias", "encoder.backbone.layers.1.5._se_expand.weight", "encoder.backbone.layers.1.5._se_expand.bias", "encoder.backbone.layers.1.5._project_conv.weight", "encoder.backbone.layers.1.5._bn2.weight", "encoder.backbone.layers.1.5._bn2.bias", "encoder.backbone.layers.1.5._bn2.running_mean", "encoder.backbone.layers.1.5._bn2.running_var", "encoder.backbone.layers.1.5._bn2.num_batches_tracked". 
        size mismatch for encoder.backbone.layers.2.3._depthwise_conv.weight: copying a param with shape torch.Size([240, 1, 5, 5]) from checkpoint, the shape in current model is torch.Size([240, 1, 3, 3]).
        size mismatch for encoder.backbone.layers.2.3._project_conv.weight: copying a param with shape torch.Size([72, 240, 1, 1]) from checkpoint, the shape in current model is torch.Size([40, 240, 1, 1]).
        size mismatch for encoder.backbone.layers.2.3._bn2.weight: copying a param with shape torch.Size([72]) from checkpoint, the shape in current model is torch.Size([40]).
        size mismatch for encoder.backbone.layers.2.3._bn2.bias: copying a param with shape torch.Size([72]) from checkpoint, the shape in current model is torch.Size([40]).
        size mismatch for encoder.backbone.layers.2.3._bn2.running_mean: copying a param with shape torch.Size([72]) from checkpoint, the shape in current model is torch.Size([40]).
        size mismatch for encoder.backbone.layers.2.3._bn2.running_var: copying a param with shape torch.Size([72]) from checkpoint, the shape in current model is torch.Size([40]).
        size mismatch for encoder.backbone.layers.2.4._expand_conv.weight: copying a param with shape torch.Size([432, 72, 1, 1]) from checkpoint, the shape in current model is torch.Size([240, 40, 1, 1]).
        size mismatch for encoder.backbone.layers.2.4._bn0.weight: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.4._bn0.bias: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.4._bn0.running_mean: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.4._bn0.running_var: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.4._depthwise_conv.weight: copying a param with shape torch.Size([432, 1, 5, 5]) from checkpoint, the shape in current model is torch.Size([240, 1, 3, 3]).
        size mismatch for encoder.backbone.layers.2.4._bn1.weight: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.4._bn1.bias: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.4._bn1.running_mean: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.4._bn1.running_var: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.4._se_reduce.weight: copying a param with shape torch.Size([18, 432, 1, 1]) from checkpoint, the shape in current model is torch.Size([10, 240, 1, 1]).
        size mismatch for encoder.backbone.layers.2.4._se_reduce.bias: copying a param with shape torch.Size([18]) from checkpoint, the shape in current model is torch.Size([10]).
        size mismatch for encoder.backbone.layers.2.4._se_expand.weight: copying a param with shape torch.Size([432, 18, 1, 1]) from checkpoint, the shape in current model is torch.Size([240, 10, 1, 1]).
        size mismatch for encoder.backbone.layers.2.4._se_expand.bias: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.4._project_conv.weight: copying a param with shape torch.Size([72, 432, 1, 1]) from checkpoint, the shape in current model is torch.Size([40, 240, 1, 1]).
        size mismatch for encoder.backbone.layers.2.4._bn2.weight: copying a param with shape torch.Size([72]) from checkpoint, the shape in current model is torch.Size([40]).
        size mismatch for encoder.backbone.layers.2.4._bn2.bias: copying a param with shape torch.Size([72]) from checkpoint, the shape in current model is torch.Size([40]).
        size mismatch for encoder.backbone.layers.2.4._bn2.running_mean: copying a param with shape torch.Size([72]) from checkpoint, the shape in current model is torch.Size([40]).
        size mismatch for encoder.backbone.layers.2.4._bn2.running_var: copying a param with shape torch.Size([72]) from checkpoint, the shape in current model is torch.Size([40]).
        size mismatch for encoder.backbone.layers.2.5._expand_conv.weight: copying a param with shape torch.Size([432, 72, 1, 1]) from checkpoint, the shape in current model is torch.Size([240, 40, 1, 1]).
        size mismatch for encoder.backbone.layers.2.5._bn0.weight: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.5._bn0.bias: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.5._bn0.running_mean: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.5._bn0.running_var: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.5._depthwise_conv.weight: copying a param with shape torch.Size([432, 1, 5, 5]) from checkpoint, the shape in current model is torch.Size([240, 1, 5, 5]).
        size mismatch for encoder.backbone.layers.2.5._bn1.weight: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.5._bn1.bias: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.5._bn1.running_mean: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.5._bn1.running_var: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.5._se_reduce.weight: copying a param with shape torch.Size([18, 432, 1, 1]) from checkpoint, the shape in current model is torch.Size([10, 240, 1, 1]).
        size mismatch for encoder.backbone.layers.2.5._se_reduce.bias: copying a param with shape torch.Size([18]) from checkpoint, the shape in current model is torch.Size([10]).
        size mismatch for encoder.backbone.layers.2.5._se_expand.weight: copying a param with shape torch.Size([432, 18, 1, 1]) from checkpoint, the shape in current model is torch.Size([240, 10, 1, 1]).
        size mismatch for encoder.backbone.layers.2.5._se_expand.bias: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([240]).
        size mismatch for encoder.backbone.layers.2.5._project_conv.weight: copying a param with shape torch.Size([72, 432, 1, 1]) from checkpoint, the shape in current model is torch.Size([72, 240, 1, 1]).
        size mismatch for encoder.backbone.layers.3.3._depthwise_conv.weight: copying a param with shape torch.Size([432, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([432, 1, 5, 5]).
        size mismatch for encoder.backbone.layers.3.3._project_conv.weight: copying a param with shape torch.Size([144, 432, 1, 1]) from checkpoint, the shape in current model is torch.Size([72, 432, 1, 1]).
        size mismatch for encoder.backbone.layers.3.3._bn2.weight: copying a param with shape torch.Size([144]) from checkpoint, the shape in current model is torch.Size([72]).
        size mismatch for encoder.backbone.layers.3.3._bn2.bias: copying a param with shape torch.Size([144]) from checkpoint, the shape in current model is torch.Size([72]).
        size mismatch for encoder.backbone.layers.3.3._bn2.running_mean: copying a param with shape torch.Size([144]) from checkpoint, the shape in current model is torch.Size([72]).
        size mismatch for encoder.backbone.layers.3.3._bn2.running_var: copying a param with shape torch.Size([144]) from checkpoint, the shape in current model is torch.Size([72]).
        size mismatch for encoder.backbone.layers.3.4._expand_conv.weight: copying a param with shape torch.Size([864, 144, 1, 1]) from checkpoint, the shape in current model is torch.Size([432, 72, 1, 1]).
        size mismatch for encoder.backbone.layers.3.4._bn0.weight: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.4._bn0.bias: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.4._bn0.running_mean: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.4._bn0.running_var: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.4._depthwise_conv.weight: copying a param with shape torch.Size([864, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([432, 1, 5, 5]).
        size mismatch for encoder.backbone.layers.3.4._bn1.weight: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.4._bn1.bias: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.4._bn1.running_mean: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.4._bn1.running_var: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.4._se_reduce.weight: copying a param with shape torch.Size([36, 864, 1, 1]) from checkpoint, the shape in current model is torch.Size([18, 432, 1, 1]).
        size mismatch for encoder.backbone.layers.3.4._se_reduce.bias: copying a param with shape torch.Size([36]) from checkpoint, the shape in current model is torch.Size([18]).
        size mismatch for encoder.backbone.layers.3.4._se_expand.weight: copying a param with shape torch.Size([864, 36, 1, 1]) from checkpoint, the shape in current model is torch.Size([432, 18, 1, 1]).
        size mismatch for encoder.backbone.layers.3.4._se_expand.bias: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.4._project_conv.weight: copying a param with shape torch.Size([144, 864, 1, 1]) from checkpoint, the shape in current model is torch.Size([72, 432, 1, 1]).
        size mismatch for encoder.backbone.layers.3.4._bn2.weight: copying a param with shape torch.Size([144]) from checkpoint, the shape in current model is torch.Size([72]).
        size mismatch for encoder.backbone.layers.3.4._bn2.bias: copying a param with shape torch.Size([144]) from checkpoint, the shape in current model is torch.Size([72]).
        size mismatch for encoder.backbone.layers.3.4._bn2.running_mean: copying a param with shape torch.Size([144]) from checkpoint, the shape in current model is torch.Size([72]).
        size mismatch for encoder.backbone.layers.3.4._bn2.running_var: copying a param with shape torch.Size([144]) from checkpoint, the shape in current model is torch.Size([72]).
        size mismatch for encoder.backbone.layers.3.5._expand_conv.weight: copying a param with shape torch.Size([864, 144, 1, 1]) from checkpoint, the shape in current model is torch.Size([432, 72, 1, 1]).
        size mismatch for encoder.backbone.layers.3.5._bn0.weight: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.5._bn0.bias: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.5._bn0.running_mean: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.5._bn0.running_var: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.5._depthwise_conv.weight: copying a param with shape torch.Size([864, 1, 3, 3]) from checkpoint, the shape in current model is torch.Size([432, 1, 3, 3]).
        size mismatch for encoder.backbone.layers.3.5._bn1.weight: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.5._bn1.bias: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.5._bn1.running_mean: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.5._bn1.running_var: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.5._se_reduce.weight: copying a param with shape torch.Size([36, 864, 1, 1]) from checkpoint, the shape in current model is torch.Size([18, 432, 1, 1]).
        size mismatch for encoder.backbone.layers.3.5._se_reduce.bias: copying a param with shape torch.Size([36]) from checkpoint, the shape in current model is torch.Size([18]).
        size mismatch for encoder.backbone.layers.3.5._se_expand.weight: copying a param with shape torch.Size([864, 36, 1, 1]) from checkpoint, the shape in current model is torch.Size([432, 18, 1, 1]).
        size mismatch for encoder.backbone.layers.3.5._se_expand.bias: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([432]).
        size mismatch for encoder.backbone.layers.3.5._project_conv.weight: copying a param with shape torch.Size([144, 864, 1, 1]) from checkpoint, the shape in current model is torch.Size([144, 432, 1, 1]).

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
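
Before the strict load fails, it can help to diff the checkpoint against the model. The following is a minimal diagnostic sketch, not code from this repository: `diff_checkpoint` is a hypothetical helper, and it assumes the model (`backbone`) has been constructed the same way `load_backbone` in `homo_transformer/common.py` constructs it, with the checkpoint keys already using the model's naming.

```python
import torch
from torch import nn


def diff_checkpoint(model: nn.Module, ckpt_path: str) -> None:
    """Report keys that are missing, unexpected, or shape-mismatched
    between `model` and the checkpoint at `ckpt_path` (hypothetical helper)."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Lightning checkpoints nest the weights under "state_dict"; raw state dicts do not.
    state = ckpt.get("state_dict", ckpt)
    model_state = model.state_dict()

    missing = sorted(set(model_state) - set(state))
    unexpected = sorted(set(state) - set(model_state))
    mismatched = [k for k in sorted(set(state) & set(model_state))
                  if state[k].shape != model_state[k].shape]

    print(f"missing: {len(missing)}  unexpected: {len(unexpected)}  shape mismatches: {len(mismatched)}")
    for k in mismatched:
        print(f"{k}: checkpoint {tuple(state[k].shape)} vs model {tuple(model_state[k].shape)}")


# Usage (assumes `backbone` is built exactly as load_backbone() builds it,
# before the failing load_state_dict call):
# diff_checkpoint(backbone, "Apolloscape/model.ckpt")
```
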
Geralt-of-winterfall commented 1 month ago

Also, what inputs does the model require? Does it need the pose of each individual frame?

bobododosjl commented 1 month ago

I ran into this problem too. Is the data you downloaded lane_segmentation_sample?

ShanWang-Shan commented 1 month ago

Sorry, I'm a bit busy at the moment, but I will look into this when I can. I'm using the lane_segmentation dataset, not only the lane_segmentation_sample. Yes, per-frame extrinsics are required, and we have updated the README to include the pose information we used, which originally comes from the self-localization dataset. For details on the inputs, please refer to '~homo_transformer/data/apolloscape_dataset.py'.
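
For anyone checking their data layout, here is a small sketch (not code from the repository) that verifies a pose.txt is present for every Record/Camera folder, following the paths printed in the log above; the root path is simply the one from that log.

```python
from pathlib import Path

# Hypothetical sanity check (not part of the repo): confirm a pose.txt exists
# for every Record*/Camera* folder, following the layout printed in the log
# above; the root path is assumed from that log.
pose_root = Path("/media/user/opensourceDataset/Apollo/road05_Pose")

for record in sorted(pose_root.glob("Record*")):
    for camera in sorted(record.glob("Camera *")):
        pose_file = camera / "pose.txt"
        print(f"{pose_file}: {'ok' if pose_file.is_file() else 'MISSING'}")
```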

Geralt-of-winterfall commented 1 month ago

> I ran into this problem too. Is the data you downloaded lane_segmentation_sample?

No, it's ColorImage_road05, the test sequences. I also ran lane_segmentation_sample and it had the same problem.

bobododosjl commented 1 month ago

OK, then it's a problem with the checkpoint.

bobododosjl commented 1 month ago

Have you tried training it yourself? What GPU are you using?

Geralt-of-winterfall commented 1 month ago

No, I haven't trained it myself; I wanted to try inference first.

CCodie commented 1 month ago

Same problem here; it seems to be caused by the checkpoint file.

cccober commented 1 week ago

So did it work?

ShanWang-Shan commented 1 week ago

I mistakenly uploaded the wrong checkpoint. Please use the updated EfficientNet-B4 checkpoint instead: https://drive.google.com/file/d/1ULLm16pedeoZ0OYOmsmdtDSSkpcVoBhc/view?usp=drive_link
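
For anyone verifying the download, here is a small inspection sketch (not code from the repository) that counts the backbone blocks per stage stored in a checkpoint, so the numbers can be compared against what the model builds; the key pattern is assumed from the traceback above, and the checkpoint filename is a placeholder.

```python
import re
from collections import defaultdict

import torch

# Inspection sketch (not from the repo): count backbone blocks per stage in the
# downloaded checkpoint. The key pattern "encoder.backbone.layers.<stage>.<block>..."
# is assumed from the traceback above; adjust it if the raw .ckpt stores the
# weights under an extra prefix.
ckpt = torch.load("model.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)

blocks = defaultdict(set)
for key in state_dict:
    m = re.search(r"encoder\.backbone\.layers\.(\d+)\.(\d+)\.", key)
    if m:
        blocks[int(m.group(1))].add(int(m.group(2)))

for stage in sorted(blocks):
    print(f"stage {stage}: {len(blocks[stage])} blocks")
```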