GAP-LAB-CUHK-SZ / InstPIFu

Repository of "Towards High-Fidelity Single-view Holistic Reconstruction of Indoor Scenes" (ECCV 2022)

The pretrained weight of the background reconstruction #25


SunWeiLin-Lynne commented 1 year ago

Hello, thank you very much for your excellent work! When I load the pretrained background weights (model_best_bg.pth), the following error occurs. Could you update the pretrained weights for background reconstruction?

*** RuntimeError: Error(s) in loading state_dict for BGPIFu_Net:
        Missing key(s) in state_dict: "global_encoder.conv1.weight", "global_encoder.bn1.weight", "global_encoder.bn1.bias", "global_encoder.bn1.running_mean", "global_encoder.bn1.running_var", "global_encoder.layer1.0.conv1.weight", "global_encoder.layer1.0.conv2.weight", "global_encoder.layer1.1.conv1.weight", "global_encoder.layer1.1.conv2.weight", "global_encoder.layer2.0.conv1.weight", "global_encoder.layer2.0.conv2.weight", "global_encoder.layer2.0.downsample.0.weight", "global_encoder.layer2.0.downsample.1.weight", "global_encoder.layer2.0.downsample.1.bias", "global_encoder.layer2.0.downsample.1.running_mean", "global_encoder.layer2.0.downsample.1.running_var", "global_encoder.layer2.1.conv1.weight", "global_encoder.layer2.1.conv2.weight", "global_encoder.layer3.0.conv1.weight", "global_encoder.layer3.0.conv2.weight", "global_encoder.layer3.0.downsample.0.weight", "global_encoder.layer3.0.downsample.1.weight", "global_encoder.layer3.0.downsample.1.bias", "global_encoder.layer3.0.downsample.1.running_mean", "global_encoder.layer3.0.downsample.1.running_var", "global_encoder.layer3.1.conv1.weight", "global_encoder.layer3.1.conv2.weight", "global_encoder.layer4.0.conv1.weight", "global_encoder.layer4.0.conv2.weight", "global_encoder.layer4.0.downsample.0.weight", "global_encoder.layer4.0.downsample.1.weight", "global_encoder.layer4.0.downsample.1.bias", "global_encoder.layer4.0.downsample.1.running_mean", "global_encoder.layer4.0.downsample.1.running_var", "global_encoder.layer4.1.conv1.weight", "global_encoder.layer4.1.conv2.weight", "global_encoder.fc.weight", "global_encoder.fc.bias".
        Unexpected key(s) in state_dict: "global_surface_classifier.conv0.weight", "global_surface_classifier.conv0.bias", "global_surface_classifier.conv1.weight", "global_surface_classifier.conv1.bias", "global_surface_classifier.conv2.weight", "global_surface_classifier.conv2.bias", "global_surface_classifier.conv3.weight", "global_surface_classifier.conv3.bias", "mask_decoder.0.weight", "mask_decoder.0.bias", "mask_decoder.2.weight", "mask_decoder.2.bias", "mask_decoder.4.weight", "mask_decoder.4.bias", "mask_decoder.6.weight", "mask_decoder.6.bias", "post_op_module.channel_mlp.0.weight", "post_op_module.channel_mlp.0.bias", "post_op_module.channel_mlp.2.weight", "post_op_module.channel_mlp.2.bias", "post_op_module.channel_mlp.4.weight", "post_op_module.channel_mlp.4.bias", "post_op_module.post_conv.0.weight", "post_op_module.post_conv.0.bias", "post_op_module.post_conv.2.weight", "post_op_module.post_conv.2.bias", "post_op_module.post_conv.4.weight", "post_op_module.post_conv.4.bias", "post_op_module.post_conv.6.weight", "post_op_module.post_conv.6.bias", "post_op_module.pre_conv.0.weight", "post_op_module.pre_conv.0.bias", "post_op_module.pre_conv.2.weight", "post_op_module.pre_conv.2.bias", "global_encoder.0.weight", "global_encoder.0.bias", "global_encoder.1.conv1.weight", "global_encoder.1.bn1.weight", "global_encoder.1.bn1.bias", "global_encoder.1.bn1.running_mean", "global_encoder.1.bn1.running_var", "global_encoder.1.bn1.num_batches_tracked", "global_encoder.1.layer1.0.conv1.weight", "global_encoder.1.layer1.0.conv2.weight", "global_encoder.1.layer1.1.conv1.weight", "global_encoder.1.layer1.1.conv2.weight", "global_encoder.1.layer2.0.conv1.weight", "global_encoder.1.layer2.0.conv2.weight", "global_encoder.1.layer2.0.downsample.0.weight", "global_encoder.1.layer2.0.downsample.1.weight", "global_encoder.1.layer2.0.downsample.1.bias", "global_encoder.1.layer2.0.downsample.1.running_mean", "global_encoder.1.layer2.0.downsample.1.running_var", "global_encoder.1.layer2.0.downsample.1.num_batches_tracked", "global_encoder.1.layer2.1.conv1.weight", "global_encoder.1.layer2.1.conv2.weight", "global_encoder.1.layer3.0.conv1.weight", "global_encoder.1.layer3.0.conv2.weight", "global_encoder.1.layer3.0.downsample.0.weight", "global_encoder.1.layer3.0.downsample.1.weight", "global_encoder.1.layer3.0.downsample.1.bias", "global_encoder.1.layer3.0.downsample.1.running_mean", "global_encoder.1.layer3.0.downsample.1.running_var", "global_encoder.1.layer3.0.downsample.1.num_batches_tracked", "global_encoder.1.layer3.1.conv1.weight", "global_encoder.1.layer3.1.conv2.weight", "global_encoder.1.layer4.0.conv1.weight", "global_encoder.1.layer4.0.conv2.weight", "global_encoder.1.layer4.0.downsample.0.weight", "global_encoder.1.layer4.0.downsample.1.weight", "global_encoder.1.layer4.0.downsample.1.bias", "global_encoder.1.layer4.0.downsample.1.running_mean", "global_encoder.1.layer4.0.downsample.1.running_var", "global_encoder.1.layer4.0.downsample.1.num_batches_tracked", "global_encoder.1.layer4.1.conv1.weight", "global_encoder.1.layer4.1.conv2.weight", "global_encoder.2.weight", "global_encoder.2.bias", "global_encoder.4.weight", "global_encoder.4.bias", "global_encoder.6.weight", "global_encoder.6.bias".
        size mismatch for surface_classifier.conv0.weight: copying a param with shape torch.Size([1024, 549, 1]) from checkpoint, the shape in current model is torch.Size([1024, 1283, 1]).
        size mismatch for surface_classifier.conv1.weight: copying a param with shape torch.Size([512, 1573, 1]) from checkpoint, the shape in current model is torch.Size([512, 2307, 1]).
        size mismatch for surface_classifier.conv2.weight: copying a param with shape torch.Size([256, 1061, 1]) from checkpoint, the shape in current model is torch.Size([256, 1795, 1]).
        size mismatch for surface_classifier.conv3.weight: copying a param with shape torch.Size([128, 805, 1]) from checkpoint, the shape in current model is torch.Size([128, 1539, 1]).
        size mismatch for surface_classifier.conv4.weight: copying a param with shape torch.Size([1, 677, 1]) from checkpoint, the shape in current model is torch.Size([1, 1411, 1]).
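
As a quick sanity check, the checkpoint's keys can be diffed against the model's before calling load_state_dict. Below is a minimal sketch, assuming PyTorch and that `model` is the already-instantiated BGPIFu_Net; the checkpoint file name and the "model" nesting key are assumptions, not the repo's confirmed layout:

```python
import torch

# Load the checkpoint on CPU so no GPU is needed just to inspect it.
ckpt = torch.load("model_best_bg.pth", map_location="cpu")
# Some checkpoints nest the weights under a key such as "model"; unwrap if so.
state_dict = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt

ckpt_keys = set(state_dict.keys())
model_keys = set(model.state_dict().keys())  # `model`: the instantiated BGPIFu_Net

print("missing from checkpoint:", sorted(model_keys - ckpt_keys))
print("unexpected in checkpoint:", sorted(ckpt_keys - model_keys))
```

If both sets are large and structurally different, as in the traceback above, the checkpoint most likely belongs to a different network than the one being constructed.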
HaolinLiu97 commented 1 year ago

Hi, sorry for the late reply. I have checked the weights again; model_best_bg.pth should be correct. Is it possible that you loaded the pretrained weights for the object model into the background network? Here are the weight keys for model_best_bg.pth:

odict_keys(['module.image_filter.conv1.weight', 'module.image_filter.conv1.bias', 'module.image_filter.bn1.weight', 'module.image_filter.bn1.bias', 'module.image_filter.conv2.conv1.weight', 'module.image_filter.conv2.conv2.weight', 'module.image_filter.conv2.conv3.weight', 'module.image_filter.conv2.bn1.weight', 'module.image_filter.conv2.bn1.bias', 'module.image_filter.conv2.bn2.weight', 'module.image_filter.conv2.bn2.bias', 'module.image_filter.conv2.bn3.weight', 'module.image_filter.conv2.bn3.bias', 'module.image_filter.conv2.bn4.weight', 'module.image_filter.conv2.bn4.bias', 'module.image_filter.conv2.downsample.0.weight', 'module.image_filter.conv2.downsample.0.bias', 'module.image_filter.conv2.downsample.2.weight', 'module.image_filter.conv3.conv1.weight', 'module.image_filter.conv3.conv2.weight', 'module.image_filter.conv3.conv3.weight', 'module.image_filter.conv3.bn1.weight', 'module.image_filter.conv3.bn1.bias', 'module.image_filter.conv3.bn2.weight', 'module.image_filter.conv3.bn2.bias', 'module.image_filter.conv3.bn3.weight', 'module.image_filter.conv3.bn3.bias', 'module.image_filter.conv3.bn4.weight', 'module.image_filter.conv3.bn4.bias', 'module.image_filter.conv4.conv1.weight', 'module.image_filter.conv4.conv2.weight', 'module.image_filter.conv4.conv3.weight', 'module.image_filter.conv4.bn1.weight', 'module.image_filter.conv4.bn1.bias', 'module.image_filter.conv4.bn2.weight', 'module.image_filter.conv4.bn2.bias', 'module.image_filter.conv4.bn3.weight', 'module.image_filter.conv4.bn3.bias', 'module.image_filter.conv4.bn4.weight', 'module.image_filter.conv4.bn4.bias', 'module.image_filter.conv4.downsample.0.weight', 'module.image_filter.conv4.downsample.0.bias', 'module.image_filter.conv4.downsample.2.weight', 'module.image_filter.m0.b1_2.conv1.weight', 'module.image_filter.m0.b1_2.conv2.weight', 'module.image_filter.m0.b1_2.conv3.weight', 'module.image_filter.m0.b1_2.bn1.weight', 'module.image_filter.m0.b1_2.bn1.bias', 'module.image_filter.m0.b1_2.bn2.weight', 'module.image_filter.m0.b1_2.bn2.bias', 'module.image_filter.m0.b1_2.bn3.weight', 'module.image_filter.m0.b1_2.bn3.bias', 'module.image_filter.m0.b1_2.bn4.weight', 'module.image_filter.m0.b1_2.bn4.bias', 'module.image_filter.m0.b2_2.conv1.weight', 'module.image_filter.m0.b2_2.conv2.weight', 'module.image_filter.m0.b2_2.conv3.weight', 'module.image_filter.m0.b2_2.bn1.weight', 'module.image_filter.m0.b2_2.bn1.bias', 'module.image_filter.m0.b2_2.bn2.weight', 'module.image_filter.m0.b2_2.bn2.bias', 'module.image_filter.m0.b2_2.bn3.weight', 'module.image_filter.m0.b2_2.bn3.bias', 'module.image_filter.m0.b2_2.bn4.weight', 'module.image_filter.m0.b2_2.bn4.bias', 'module.image_filter.m0.b1_1.conv1.weight', 'module.image_filter.m0.b1_1.conv2.weight', 'module.image_filter.m0.b1_1.conv3.weight', 'module.image_filter.m0.b1_1.bn1.weight', 'module.image_filter.m0.b1_1.bn1.bias', 'module.image_filter.m0.b1_1.bn2.weight', 'module.image_filter.m0.b1_1.bn2.bias', 'module.image_filter.m0.b1_1.bn3.weight', 'module.image_filter.m0.b1_1.bn3.bias', 'module.image_filter.m0.b1_1.bn4.weight', 'module.image_filter.m0.b1_1.bn4.bias', 'module.image_filter.m0.b2_1.conv1.weight', 'module.image_filter.m0.b2_1.conv2.weight', 'module.image_filter.m0.b2_1.conv3.weight', 
'module.image_filter.m0.b2_1.bn1.weight', 'module.image_filter.m0.b2_1.bn1.bias', 'module.image_filter.m0.b2_1.bn2.weight', 'module.image_filter.m0.b2_1.bn2.bias', 'module.image_filter.m0.b2_1.bn3.weight', 'module.image_filter.m0.b2_1.bn3.bias', 'module.image_filter.m0.b2_1.bn4.weight', 'module.image_filter.m0.b2_1.bn4.bias', 'module.image_filter.m0.b2_plus_1.conv1.weight', 'module.image_filter.m0.b2_plus_1.conv2.weight', 'module.image_filter.m0.b2_plus_1.conv3.weight', 'module.image_filter.m0.b2_plus_1.bn1.weight', 'module.image_filter.m0.b2_plus_1.bn1.bias', 'module.image_filter.m0.b2_plus_1.bn2.weight', 'module.image_filter.m0.b2_plus_1.bn2.bias', 'module.image_filter.m0.b2_plus_1.bn3.weight', 'module.image_filter.m0.b2_plus_1.bn3.bias', 'module.image_filter.m0.b2_plus_1.bn4.weight', 'module.image_filter.m0.b2_plus_1.bn4.bias', 'module.image_filter.m0.b3_1.conv1.weight', 'module.image_filter.m0.b3_1.conv2.weight', 'module.image_filter.m0.b3_1.conv3.weight', 'module.image_filter.m0.b3_1.bn1.weight', 'module.image_filter.m0.b3_1.bn1.bias', 'module.image_filter.m0.b3_1.bn2.weight', 'module.image_filter.m0.b3_1.bn2.bias', 'module.image_filter.m0.b3_1.bn3.weight', 'module.image_filter.m0.b3_1.bn3.bias', 'module.image_filter.m0.b3_1.bn4.weight', 'module.image_filter.m0.b3_1.bn4.bias', 'module.image_filter.m0.b3_2.conv1.weight', 'module.image_filter.m0.b3_2.conv2.weight', 'module.image_filter.m0.b3_2.conv3.weight', 'module.image_filter.m0.b3_2.bn1.weight', 'module.image_filter.m0.b3_2.bn1.bias', 'module.image_filter.m0.b3_2.bn2.weight', 'module.image_filter.m0.b3_2.bn2.bias', 'module.image_filter.m0.b3_2.bn3.weight', 'module.image_filter.m0.b3_2.bn3.bias', 'module.image_filter.m0.b3_2.bn4.weight', 'module.image_filter.m0.b3_2.bn4.bias', 'module.image_filter.top_m_0.conv1.weight', 'module.image_filter.top_m_0.conv2.weight', 'module.image_filter.top_m_0.conv3.weight', 'module.image_filter.top_m_0.bn1.weight', 'module.image_filter.top_m_0.bn1.bias', 'module.image_filter.top_m_0.bn2.weight', 'module.image_filter.top_m_0.bn2.bias', 'module.image_filter.top_m_0.bn3.weight', 'module.image_filter.top_m_0.bn3.bias', 'module.image_filter.top_m_0.bn4.weight', 'module.image_filter.top_m_0.bn4.bias', 'module.image_filter.conv_last0.weight', 'module.image_filter.conv_last0.bias', 'module.image_filter.bn_end0.weight', 'module.image_filter.bn_end0.bias', 'module.image_filter.l0.weight', 'module.image_filter.l0.bias', 'module.image_filter.bl0.weight', 'module.image_filter.bl0.bias', 'module.image_filter.al0.weight', 'module.image_filter.al0.bias', 'module.image_filter.m1.b1_2.conv1.weight', 'module.image_filter.m1.b1_2.conv2.weight', 'module.image_filter.m1.b1_2.conv3.weight', 'module.image_filter.m1.b1_2.bn1.weight', 'module.image_filter.m1.b1_2.bn1.bias', 'module.image_filter.m1.b1_2.bn2.weight', 'module.image_filter.m1.b1_2.bn2.bias', 'module.image_filter.m1.b1_2.bn3.weight', 'module.image_filter.m1.b1_2.bn3.bias', 'module.image_filter.m1.b1_2.bn4.weight', 'module.image_filter.m1.b1_2.bn4.bias', 'module.image_filter.m1.b2_2.conv1.weight', 'module.image_filter.m1.b2_2.conv2.weight', 'module.image_filter.m1.b2_2.conv3.weight', 'module.image_filter.m1.b2_2.bn1.weight', 'module.image_filter.m1.b2_2.bn1.bias', 'module.image_filter.m1.b2_2.bn2.weight', 'module.image_filter.m1.b2_2.bn2.bias', 'module.image_filter.m1.b2_2.bn3.weight', 'module.image_filter.m1.b2_2.bn3.bias', 'module.image_filter.m1.b2_2.bn4.weight', 'module.image_filter.m1.b2_2.bn4.bias', 'module.image_filter.m1.b1_1.conv1.weight', 
'module.image_filter.m1.b1_1.conv2.weight', 'module.image_filter.m1.b1_1.conv3.weight', 'module.image_filter.m1.b1_1.bn1.weight', 'module.image_filter.m1.b1_1.bn1.bias', 'module.image_filter.m1.b1_1.bn2.weight', 'module.image_filter.m1.b1_1.bn2.bias', 'module.image_filter.m1.b1_1.bn3.weight', 'module.image_filter.m1.b1_1.bn3.bias', 'module.image_filter.m1.b1_1.bn4.weight', 'module.image_filter.m1.b1_1.bn4.bias', 'module.image_filter.m1.b2_1.conv1.weight', 'module.image_filter.m1.b2_1.conv2.weight', 'module.image_filter.m1.b2_1.conv3.weight', 'module.image_filter.m1.b2_1.bn1.weight', 'module.image_filter.m1.b2_1.bn1.bias', 'module.image_filter.m1.b2_1.bn2.weight', 'module.image_filter.m1.b2_1.bn2.bias', 'module.image_filter.m1.b2_1.bn3.weight', 'module.image_filter.m1.b2_1.bn3.bias', 'module.image_filter.m1.b2_1.bn4.weight', 'module.image_filter.m1.b2_1.bn4.bias', 'module.image_filter.m1.b2_plus_1.conv1.weight', 'module.image_filter.m1.b2_plus_1.conv2.weight', 'module.image_filter.m1.b2_plus_1.conv3.weight', 'module.image_filter.m1.b2_plus_1.bn1.weight', 'module.image_filter.m1.b2_plus_1.bn1.bias', 'module.image_filter.m1.b2_plus_1.bn2.weight', 'module.image_filter.m1.b2_plus_1.bn2.bias', 'module.image_filter.m1.b2_plus_1.bn3.weight', 'module.image_filter.m1.b2_plus_1.bn3.bias', 'module.image_filter.m1.b2_plus_1.bn4.weight', 'module.image_filter.m1.b2_plus_1.bn4.bias', 'module.image_filter.m1.b3_1.conv1.weight', 'module.image_filter.m1.b3_1.conv2.weight', 'module.image_filter.m1.b3_1.conv3.weight', 'module.image_filter.m1.b3_1.bn1.weight', 'module.image_filter.m1.b3_1.bn1.bias', 'module.image_filter.m1.b3_1.bn2.weight', 'module.image_filter.m1.b3_1.bn2.bias', 'module.image_filter.m1.b3_1.bn3.weight', 'module.image_filter.m1.b3_1.bn3.bias', 'module.image_filter.m1.b3_1.bn4.weight', 'module.image_filter.m1.b3_1.bn4.bias', 'module.image_filter.m1.b3_2.conv1.weight', 'module.image_filter.m1.b3_2.conv2.weight', 'module.image_filter.m1.b3_2.conv3.weight', 'module.image_filter.m1.b3_2.bn1.weight', 'module.image_filter.m1.b3_2.bn1.bias', 'module.image_filter.m1.b3_2.bn2.weight', 'module.image_filter.m1.b3_2.bn2.bias', 'module.image_filter.m1.b3_2.bn3.weight', 'module.image_filter.m1.b3_2.bn3.bias', 'module.image_filter.m1.b3_2.bn4.weight', 'module.image_filter.m1.b3_2.bn4.bias', 'module.image_filter.top_m_1.conv1.weight', 'module.image_filter.top_m_1.conv2.weight', 'module.image_filter.top_m_1.conv3.weight', 'module.image_filter.top_m_1.bn1.weight', 'module.image_filter.top_m_1.bn1.bias', 'module.image_filter.top_m_1.bn2.weight', 'module.image_filter.top_m_1.bn2.bias', 'module.image_filter.top_m_1.bn3.weight', 'module.image_filter.top_m_1.bn3.bias', 'module.image_filter.top_m_1.bn4.weight', 'module.image_filter.top_m_1.bn4.bias', 'module.image_filter.conv_last1.weight', 'module.image_filter.conv_last1.bias', 'module.image_filter.bn_end1.weight', 'module.image_filter.bn_end1.bias', 'module.image_filter.l1.weight', 'module.image_filter.l1.bias', 'module.image_filter.bl1.weight', 'module.image_filter.bl1.bias', 'module.image_filter.al1.weight', 'module.image_filter.al1.bias', 'module.image_filter.m2.b1_2.conv1.weight', 'module.image_filter.m2.b1_2.conv2.weight', 'module.image_filter.m2.b1_2.conv3.weight', 'module.image_filter.m2.b1_2.bn1.weight', 'module.image_filter.m2.b1_2.bn1.bias', 'module.image_filter.m2.b1_2.bn2.weight', 'module.image_filter.m2.b1_2.bn2.bias', 'module.image_filter.m2.b1_2.bn3.weight', 'module.image_filter.m2.b1_2.bn3.bias', 'module.image_filter.m2.b1_2.bn4.weight', 
'module.image_filter.m2.b1_2.bn4.bias', 'module.image_filter.m2.b2_2.conv1.weight', 'module.image_filter.m2.b2_2.conv2.weight', 'module.image_filter.m2.b2_2.conv3.weight', 'module.image_filter.m2.b2_2.bn1.weight', 'module.image_filter.m2.b2_2.bn1.bias', 'module.image_filter.m2.b2_2.bn2.weight', 'module.image_filter.m2.b2_2.bn2.bias', 'module.image_filter.m2.b2_2.bn3.weight', 'module.image_filter.m2.b2_2.bn3.bias', 'module.image_filter.m2.b2_2.bn4.weight', 'module.image_filter.m2.b2_2.bn4.bias', 'module.image_filter.m2.b1_1.conv1.weight', 'module.image_filter.m2.b1_1.conv2.weight', 'module.image_filter.m2.b1_1.conv3.weight', 'module.image_filter.m2.b1_1.bn1.weight', 'module.image_filter.m2.b1_1.bn1.bias', 'module.image_filter.m2.b1_1.bn2.weight', 'module.image_filter.m2.b1_1.bn2.bias', 'module.image_filter.m2.b1_1.bn3.weight', 'module.image_filter.m2.b1_1.bn3.bias', 'module.image_filter.m2.b1_1.bn4.weight', 'module.image_filter.m2.b1_1.bn4.bias', 'module.image_filter.m2.b2_1.conv1.weight', 'module.image_filter.m2.b2_1.conv2.weight', 'module.image_filter.m2.b2_1.conv3.weight', 'module.image_filter.m2.b2_1.bn1.weight', 'module.image_filter.m2.b2_1.bn1.bias', 'module.image_filter.m2.b2_1.bn2.weight', 'module.image_filter.m2.b2_1.bn2.bias', 'module.image_filter.m2.b2_1.bn3.weight', 'module.image_filter.m2.b2_1.bn3.bias', 'module.image_filter.m2.b2_1.bn4.weight', 'module.image_filter.m2.b2_1.bn4.bias', 'module.image_filter.m2.b2_plus_1.conv1.weight', 'module.image_filter.m2.b2_plus_1.conv2.weight', 'module.image_filter.m2.b2_plus_1.conv3.weight', 'module.image_filter.m2.b2_plus_1.bn1.weight', 'module.image_filter.m2.b2_plus_1.bn1.bias', 'module.image_filter.m2.b2_plus_1.bn2.weight', 'module.image_filter.m2.b2_plus_1.bn2.bias', 'module.image_filter.m2.b2_plus_1.bn3.weight', 'module.image_filter.m2.b2_plus_1.bn3.bias', 'module.image_filter.m2.b2_plus_1.bn4.weight', 'module.image_filter.m2.b2_plus_1.bn4.bias', 'module.image_filter.m2.b3_1.conv1.weight', 'module.image_filter.m2.b3_1.conv2.weight', 'module.image_filter.m2.b3_1.conv3.weight', 'module.image_filter.m2.b3_1.bn1.weight', 'module.image_filter.m2.b3_1.bn1.bias', 'module.image_filter.m2.b3_1.bn2.weight', 'module.image_filter.m2.b3_1.bn2.bias', 'module.image_filter.m2.b3_1.bn3.weight', 'module.image_filter.m2.b3_1.bn3.bias', 'module.image_filter.m2.b3_1.bn4.weight', 'module.image_filter.m2.b3_1.bn4.bias', 'module.image_filter.m2.b3_2.conv1.weight', 'module.image_filter.m2.b3_2.conv2.weight', 'module.image_filter.m2.b3_2.conv3.weight', 'module.image_filter.m2.b3_2.bn1.weight', 'module.image_filter.m2.b3_2.bn1.bias', 'module.image_filter.m2.b3_2.bn2.weight', 'module.image_filter.m2.b3_2.bn2.bias', 'module.image_filter.m2.b3_2.bn3.weight', 'module.image_filter.m2.b3_2.bn3.bias', 'module.image_filter.m2.b3_2.bn4.weight', 'module.image_filter.m2.b3_2.bn4.bias', 'module.image_filter.top_m_2.conv1.weight', 'module.image_filter.top_m_2.conv2.weight', 'module.image_filter.top_m_2.conv3.weight', 'module.image_filter.top_m_2.bn1.weight', 'module.image_filter.top_m_2.bn1.bias', 'module.image_filter.top_m_2.bn2.weight', 'module.image_filter.top_m_2.bn2.bias', 'module.image_filter.top_m_2.bn3.weight', 'module.image_filter.top_m_2.bn3.bias', 'module.image_filter.top_m_2.bn4.weight', 'module.image_filter.top_m_2.bn4.bias', 'module.image_filter.conv_last2.weight', 'module.image_filter.conv_last2.bias', 'module.image_filter.bn_end2.weight', 'module.image_filter.bn_end2.bias', 'module.image_filter.l2.weight', 'module.image_filter.l2.bias', 
'module.image_filter.bl2.weight', 'module.image_filter.bl2.bias', 'module.image_filter.al2.weight', 'module.image_filter.al2.bias', 'module.image_filter.m3.b1_2.conv1.weight', 'module.image_filter.m3.b1_2.conv2.weight', 'module.image_filter.m3.b1_2.conv3.weight', 'module.image_filter.m3.b1_2.bn1.weight', 'module.image_filter.m3.b1_2.bn1.bias', 'module.image_filter.m3.b1_2.bn2.weight', 'module.image_filter.m3.b1_2.bn2.bias', 'module.image_filter.m3.b1_2.bn3.weight', 'module.image_filter.m3.b1_2.bn3.bias', 'module.image_filter.m3.b1_2.bn4.weight', 'module.image_filter.m3.b1_2.bn4.bias', 'module.image_filter.m3.b2_2.conv1.weight', 'module.image_filter.m3.b2_2.conv2.weight', 'module.image_filter.m3.b2_2.conv3.weight', 'module.image_filter.m3.b2_2.bn1.weight', 'module.image_filter.m3.b2_2.bn1.bias', 'module.image_filter.m3.b2_2.bn2.weight', 'module.image_filter.m3.b2_2.bn2.bias', 'module.image_filter.m3.b2_2.bn3.weight', 'module.image_filter.m3.b2_2.bn3.bias', 'module.image_filter.m3.b2_2.bn4.weight', 'module.image_filter.m3.b2_2.bn4.bias', 'module.image_filter.m3.b1_1.conv1.weight', 'module.image_filter.m3.b1_1.conv2.weight', 'module.image_filter.m3.b1_1.conv3.weight', 'module.image_filter.m3.b1_1.bn1.weight', 'module.image_filter.m3.b1_1.bn1.bias', 'module.image_filter.m3.b1_1.bn2.weight', 'module.image_filter.m3.b1_1.bn2.bias', 'module.image_filter.m3.b1_1.bn3.weight', 'module.image_filter.m3.b1_1.bn3.bias', 'module.image_filter.m3.b1_1.bn4.weight', 'module.image_filter.m3.b1_1.bn4.bias', 'module.image_filter.m3.b2_1.conv1.weight', 'module.image_filter.m3.b2_1.conv2.weight', 'module.image_filter.m3.b2_1.conv3.weight', 'module.image_filter.m3.b2_1.bn1.weight', 'module.image_filter.m3.b2_1.bn1.bias', 'module.image_filter.m3.b2_1.bn2.weight', 'module.image_filter.m3.b2_1.bn2.bias', 'module.image_filter.m3.b2_1.bn3.weight', 'module.image_filter.m3.b2_1.bn3.bias', 'module.image_filter.m3.b2_1.bn4.weight', 'module.image_filter.m3.b2_1.bn4.bias', 'module.image_filter.m3.b2_plus_1.conv1.weight', 'module.image_filter.m3.b2_plus_1.conv2.weight', 'module.image_filter.m3.b2_plus_1.conv3.weight', 'module.image_filter.m3.b2_plus_1.bn1.weight', 'module.image_filter.m3.b2_plus_1.bn1.bias', 'module.image_filter.m3.b2_plus_1.bn2.weight', 'module.image_filter.m3.b2_plus_1.bn2.bias', 'module.image_filter.m3.b2_plus_1.bn3.weight', 'module.image_filter.m3.b2_plus_1.bn3.bias', 'module.image_filter.m3.b2_plus_1.bn4.weight', 'module.image_filter.m3.b2_plus_1.bn4.bias', 'module.image_filter.m3.b3_1.conv1.weight', 'module.image_filter.m3.b3_1.conv2.weight', 'module.image_filter.m3.b3_1.conv3.weight', 'module.image_filter.m3.b3_1.bn1.weight', 'module.image_filter.m3.b3_1.bn1.bias', 'module.image_filter.m3.b3_1.bn2.weight', 'module.image_filter.m3.b3_1.bn2.bias', 'module.image_filter.m3.b3_1.bn3.weight', 'module.image_filter.m3.b3_1.bn3.bias', 'module.image_filter.m3.b3_1.bn4.weight', 'module.image_filter.m3.b3_1.bn4.bias', 'module.image_filter.m3.b3_2.conv1.weight', 'module.image_filter.m3.b3_2.conv2.weight', 'module.image_filter.m3.b3_2.conv3.weight', 'module.image_filter.m3.b3_2.bn1.weight', 'module.image_filter.m3.b3_2.bn1.bias', 'module.image_filter.m3.b3_2.bn2.weight', 'module.image_filter.m3.b3_2.bn2.bias', 'module.image_filter.m3.b3_2.bn3.weight', 'module.image_filter.m3.b3_2.bn3.bias', 'module.image_filter.m3.b3_2.bn4.weight', 'module.image_filter.m3.b3_2.bn4.bias', 'module.image_filter.top_m_3.conv1.weight', 'module.image_filter.top_m_3.conv2.weight', 'module.image_filter.top_m_3.conv3.weight', 
'module.image_filter.top_m_3.bn1.weight', 'module.image_filter.top_m_3.bn1.bias', 'module.image_filter.top_m_3.bn2.weight', 'module.image_filter.top_m_3.bn2.bias', 'module.image_filter.top_m_3.bn3.weight', 'module.image_filter.top_m_3.bn3.bias', 'module.image_filter.top_m_3.bn4.weight', 'module.image_filter.top_m_3.bn4.bias', 'module.image_filter.conv_last3.weight', 'module.image_filter.conv_last3.bias', 'module.image_filter.bn_end3.weight', 'module.image_filter.bn_end3.bias', 'module.image_filter.l3.weight', 'module.image_filter.l3.bias', 'module.surface_classifier.conv0.weight', 'module.surface_classifier.conv0.bias', 'module.surface_classifier.conv1.weight', 'module.surface_classifier.conv1.bias', 'module.surface_classifier.conv2.weight', 'module.surface_classifier.conv2.bias', 'module.surface_classifier.conv3.weight', 'module.surface_classifier.conv3.bias', 'module.surface_classifier.conv4.weight', 'module.surface_classifier.conv4.bias', 'module.global_encoder.conv1.weight', 'module.global_encoder.bn1.weight', 'module.global_encoder.bn1.bias', 'module.global_encoder.bn1.running_mean', 'module.global_encoder.bn1.running_var', 'module.global_encoder.bn1.num_batches_tracked', 'module.global_encoder.layer1.0.conv1.weight', 'module.global_encoder.layer1.0.conv2.weight', 'module.global_encoder.layer1.1.conv1.weight', 'module.global_encoder.layer1.1.conv2.weight', 'module.global_encoder.layer2.0.conv1.weight', 'module.global_encoder.layer2.0.conv2.weight', 'module.global_encoder.layer2.0.downsample.0.weight', 'module.global_encoder.layer2.0.downsample.1.weight', 'module.global_encoder.layer2.0.downsample.1.bias', 'module.global_encoder.layer2.0.downsample.1.running_mean', 'module.global_encoder.layer2.0.downsample.1.running_var', 'module.global_encoder.layer2.0.downsample.1.num_batches_tracked', 'module.global_encoder.layer2.1.conv1.weight', 'module.global_encoder.layer2.1.conv2.weight', 'module.global_encoder.layer3.0.conv1.weight', 'module.global_encoder.layer3.0.conv2.weight', 'module.global_encoder.layer3.0.downsample.0.weight', 'module.global_encoder.layer3.0.downsample.1.weight', 'module.global_encoder.layer3.0.downsample.1.bias', 'module.global_encoder.layer3.0.downsample.1.running_mean', 'module.global_encoder.layer3.0.downsample.1.running_var', 'module.global_encoder.layer3.0.downsample.1.num_batches_tracked', 'module.global_encoder.layer3.1.conv1.weight', 'module.global_encoder.layer3.1.conv2.weight', 'module.global_encoder.layer4.0.conv1.weight', 'module.global_encoder.layer4.0.conv2.weight', 'module.global_encoder.layer4.0.downsample.0.weight', 'module.global_encoder.layer4.0.downsample.1.weight', 'module.global_encoder.layer4.0.downsample.1.bias', 'module.global_encoder.layer4.0.downsample.1.running_mean', 'module.global_encoder.layer4.0.downsample.1.running_var', 'module.global_encoder.layer4.0.downsample.1.num_batches_tracked', 'module.global_encoder.layer4.1.conv1.weight', 'module.global_encoder.layer4.1.conv2.weight', 'module.global_encoder.fc.weight', 'module.global_encoder.fc.bias'])
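
Note that every key above carries a `module.` prefix, which `torch.nn.DataParallel` prepends when a wrapped model is saved. If this checkpoint is loaded into a plain (non-DataParallel) network, the prefix has to be stripped first. A sketch under those assumptions, not the repo's exact loading code (`model` is assumed to be a BGPIFu_Net built with the matching config):

```python
import torch
from collections import OrderedDict

ckpt = torch.load("model_best_bg.pth", map_location="cpu")
state_dict = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt

# Drop the "module." prefix that nn.DataParallel prepends to every parameter name.
prefix = "module."
clean = OrderedDict(
    (k[len(prefix):] if k.startswith(prefix) else k, v)
    for k, v in state_dict.items()
)

model.load_state_dict(clean)  # `model`: a BGPIFu_Net built with the same config
```

Alternatively, wrapping the network in `torch.nn.DataParallel(model)` before loading makes the prefixed keys match directly.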

chethanchinder commented 6 months ago

@HaolinLiu97 Where can I find model_best_bg.pth? I could only find a model_best.pth file in the link provided under the best bg model. I renamed it and tried to load it, and I am getting the same error as discussed above.

HaolinLiu97 commented 6 months ago

@chethanchinder You can find it at the following link: https://cuhko365-my.sharepoint.com/personal/115010192_link_cuhk_edu_cn/_layouts/15/onedrive.aspx?id=%2Fpersonal%2F115010192%5Flink%5Fcuhk%5Fedu%5Fcn%2FDocuments%2FinstPIFu%5Fdata&ga=1

The link in the README file may be incorrect; I will check it later.

chethanchinder commented 6 months ago

@HaolinLiu97 Thank you for replying. I cannot see model_best_bg.pth under the link you sent. Could you upload model_best_bg.pth?

HaolinLiu97 commented 6 months ago

I will update it within a few hours. This is the correct link, including all data: https://cuhko365-my.sharepoint.com/:f:/g/personal/115010192_link_cuhk_edu_cn/Eg99g4P1VMVJoZ5fz3lmDkABvj7Gc7yCjq-qBuYNqWjl2w?e=72lix4