Closed chenbokaix250 closed 3 years ago
Are you using the correct config file? i.e., the one used to train the tusimple
experiment. Which PyTorch version are you using?
In config.yaml, I only changed the path of the dataset. I am using macOS 11.1, torch 1.7.1, and Python 3.8.7.
Complete output:
python3 test.py --exp_name tusimple --cfg config.yaml --epoch 2695
[2021-03-10 08:48:50,049] [INFO] Experiment name: tusimple
[2021-03-10 08:48:50,050] [INFO] Config:
exps_dir: 'experiments/'
iter_log_interval: 1
iter_time_window: 100
model_save_interval: 1
seed: 1
backup:
model:
  name: PolyRegression
  parameters:
    num_outputs: 35 # (5 lanes) * (1 conf + 2 (upper & lower) + 4 poly coeffs)
    pretrained: true
    backbone: 'resnet50'
    pred_category: false
    curriculum_steps: [0, 0, 0, 0]
loss_parameters:
  conf_weight: 1
  lower_weight: 1
  upper_weight: 1
  cls_weight: 0
  poly_weight: 300
batch_size: 16
epochs: 2695
optimizer:
  name: Adam
  parameters:
    lr: 3.0e-4
lr_scheduler:
  name: CosineAnnealingLR
  parameters:
    T_max: 385
test_parameters:
  conf_threshold: 0.5
datasets:
  train:
    type: LaneDataset
    parameters:
      dataset: tusimple
      split: train
      img_size: [360, 640]
      normalize: true
      aug_chance: 0.9090909090909091 # 10/11
      augmentations:
        - name: CropToFixedSize
          parameters:
            width: 1152
            height: 648
      root: "/Users/bokaichen/Desktop/PolyLaneNet-master/tusimple"
  test: &test
    type: LaneDataset
    parameters:
      dataset: tusimple
      split: val
      max_lanes: 5
      img_size: [360, 640]
      root: "/Users/bokaichen/Desktop/PolyLaneNet-master/tusimple"
      normalize: true
      augmentations: []
  val:
    <<: *test
[2021-03-10 08:48:50,050] [INFO] Args:
Namespace(batch_size=None, cfg='config.yaml', epoch=2695, exp_name='tusimple', view=False)
total annos 358
Transforming annotations...
Done.
fatal: not a git repository (or any of the parent directories): .git
warning: Not a git repository. Use --no-index to compare two paths outside a working tree
[... `git diff --no-index` usage help elided ...]
[2021-03-10 08:48:50,777] [INFO] Code state: Git hash:
Git diff:
[2021-03-10 08:48:50,778] [INFO] Starting testing.
[2021-03-10 08:48:51,355] [ERROR] Uncaught exception
Traceback (most recent call last):
File "test.py", line 162, in <module>
> Are you using the correct config file? i.e., the one used to train the tusimple experiment. Which PyTorch version are you using?
I found the reason.
model.load_state_dict(torch.load(os.path.join(exp_root, "models", "model_{:03d}.pt".format(epoch)))['model'])
causes the error. Changing it to
model.load_state_dict(torch.load(os.path.join(exp_root, "models", "model_{:03d}.pt".format(epoch)))['model'],False)
solves the problem!
Thank you for your work!
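A note on that fix: passing `False` as the second positional argument sets `strict=False`, which makes `load_state_dict` silently skip every mismatched key. The layers listed as "missing" keep their random initialization, so the model may run but produce meaningless predictions. A minimal sketch (using a hypothetical two-layer model standing in for PolyRegression) of inspecting what was actually loaded:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for PolyRegression: two linear layers.
model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))

# A checkpoint whose keys only partially match the model,
# simulating the backbone mismatch from this issue.
checkpoint = {"0.weight": torch.zeros(8, 4), "0.bias": torch.zeros(8)}

# strict=False loads the matching keys and returns the rest
# instead of raising RuntimeError.
result = model.load_state_dict(checkpoint, strict=False)
print(result.missing_keys)     # layers left with random init
print(result.unexpected_keys)  # checkpoint entries that were ignored
```

If `missing_keys` covers the whole backbone (as in the error above), `strict=False` hides the problem rather than solving it; the safer route is loading the checkpoint with a model built from the same config it was trained with.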
When I executed this:
python3 test.py --exp_name tusimple --cfg config.yaml --epoch 2695
[2021-03-09 08:49:42,416] [INFO] Starting testing.
[2021-03-09 08:49:42,583] [ERROR] Uncaught exception
Traceback (most recent call last):
File "test.py", line 159, in <module>
_, mean_loss = test(model, test_loader, evaluator, exp_root, cfg, epoch=test_epoch, view=args.view)
File "test.py", line 23, in test
model.load_state_dict(torch.load(os.path.join(exp_root, "models", "model_{:03d}.pt".format(epoch)))['model'])
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1224, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for PolyRegression:
Missing key(s) in state_dict: "model._conv_stem.weight", "model._bn0.weight", "model._bn0.bias", "model._bn0.running_mean", "model._bn0.running_var", "model._blocks.0._depthwise_conv.weight", "model._blocks.0._bn1.weight", "model._blocks.0._bn1.bias", "model._blocks.0._bn1.running_mean", "model._blocks.0._bn1.running_var", "model._blocks.0._se_reduce.weight", "model._blocks.0._se_reduce.bias", "model._blocks.0._se_expand.weight", "model._blocks.0._se_expand.bias", "model._blocks.0._project_conv.weight", "model._blocks.0._bn2.weight", "model._blocks.0._bn2.bias", "model._blocks.0._bn2.running_mean", "model._blocks.0._bn2.running_var", "model._blocks.1._expand_conv.weight", "model._blocks.1._bn0.weight", "model._blocks.1._bn0.bias", "model._blocks.1._bn0.running_mean", "model._blocks.1._bn0.running_var", "model._blocks.1._depthwise_conv.weight", "model._blocks.1._bn1.weight", "model._blocks.1._bn1.bias", "model._blocks.1._bn1.running_mean", "model._blocks.1._bn1.running_var", "model._blocks.1._se_reduce.weight", "model._blocks.1._se_reduce.bias", "model._blocks.1._se_expand.weight", "model._blocks.1._se_expand.bias", "model._blocks.1._project_conv.weight", "model._blocks.1._bn2.weight", "model._blocks.1._bn2.bias", "model._blocks.1._bn2.running_mean", "model._blocks.1._bn2.running_var", "model._blocks.2._expand_conv.weight", "model._blocks.2._bn0.weight", "model._blocks.2._bn0.bias", "model._blocks.2._bn0.running_mean", "model._blocks.2._bn0.running_var", "model._blocks.2._depthwise_conv.weight", "model._blocks.2._bn1.weight", "model._blocks.2._bn1.bias", "model._blocks.2._bn1.running_mean", "model._blocks.2._bn1.running_var", "model._blocks.2._se_reduce.weight", "model._blocks.2._se_reduce.bias", "model._blocks.2._se_expand.weight", "model._blocks.2._se_expand.bias", "model._blocks.2._project_conv.weight", "model._blocks.2._bn2.weight", "model._blocks.2._bn2.bias", "model._blocks.2._bn2.running_mean", "model._blocks.2._bn2.running_var", 
"model._blocks.3._expand_conv.weight", "model._blocks.3._bn0.weight", "model._blocks.3._bn0.bias", "model._blocks.3._bn0.running_mean", "model._blocks.3._bn0.running_var", "model._blocks.3._depthwise_conv.weight", "model._blocks.3._bn1.weight", "model._blocks.3._bn1.bias", "model._blocks.3._bn1.running_mean", "model._blocks.3._bn1.running_var", "model._blocks.3._se_reduce.weight", "model._blocks.3._se_reduce.bias", "model._blocks.3._se_expand.weight", "model._blocks.3._se_expand.bias", "model._blocks.3._project_conv.weight", "model._blocks.3._bn2.weight", "model._blocks.3._bn2.bias", "model._blocks.3._bn2.running_mean", "model._blocks.3._bn2.running_var", "model._blocks.4._expand_conv.weight", "model._blocks.4._bn0.weight", "model._blocks.4._bn0.bias", "model._blocks.4._bn0.running_mean", "model._blocks.4._bn0.running_var", "model._blocks.4._depthwise_conv.weight", "model._blocks.4._bn1.weight", "model._blocks.4._bn1.bias", "model._blocks.4._bn1.running_mean", "model._blocks.4._bn1.running_var", "model._blocks.4._se_reduce.weight", "model._blocks.4._se_reduce.bias", "model._blocks.4._se_expand.weight", "model._blocks.4._se_expand.bias", "model._blocks.4._project_conv.weight", "model._blocks.4._bn2.weight", "model._blocks.4._bn2.bias", "model._blocks.4._bn2.running_mean", "model._blocks.4._bn2.running_var", "model._blocks.5._expand_conv.weight", "model._blocks.5._bn0.weight", "model._blocks.5._bn0.bias", "model._blocks.5._bn0.running_mean", "model._blocks.5._bn0.running_var", "model._blocks.5._depthwise_conv.weight", "model._blocks.5._bn1.weight", "model._blocks.5._bn1.bias", "model._blocks.5._bn1.running_mean", "model._blocks.5._bn1.running_var", "model._blocks.5._se_reduce.weight", "model._blocks.5._se_reduce.bias", "model._blocks.5._se_expand.weight", "model._blocks.5._se_expand.bias", "model._blocks.5._project_conv.weight", "model._blocks.5._bn2.weight", "model._blocks.5._bn2.bias", "model._blocks.5._bn2.running_mean", "model._blocks.5._bn2.running_var", 
"model._blocks.6._expand_conv.weight", "model._blocks.6._bn0.weight", "model._blocks.6._bn0.bias", "model._blocks.6._bn0.running_mean", "model._blocks.6._bn0.running_var", "model._blocks.6._depthwise_conv.weight", "model._blocks.6._bn1.weight", "model._blocks.6._bn1.bias", "model._blocks.6._bn1.running_mean", "model._blocks.6._bn1.running_var", "model._blocks.6._se_reduce.weight", "model._blocks.6._se_reduce.bias", "model._blocks.6._se_expand.weight", "model._blocks.6._se_expand.bias", "model._blocks.6._project_conv.weight", "model._blocks.6._bn2.weight", "model._blocks.6._bn2.bias", "model._blocks.6._bn2.running_mean", "model._blocks.6._bn2.running_var", "model._blocks.7._expand_conv.weight", "model._blocks.7._bn0.weight", "model._blocks.7._bn0.bias", "model._blocks.7._bn0.running_mean", "model._blocks.7._bn0.running_var", "model._blocks.7._depthwise_conv.weight", "model._blocks.7._bn1.weight", "model._blocks.7._bn1.bias", "model._blocks.7._bn1.running_mean", "model._blocks.7._bn1.running_var", "model._blocks.7._se_reduce.weight", "model._blocks.7._se_reduce.bias", "model._blocks.7._se_expand.weight", "model._blocks.7._se_expand.bias", "model._blocks.7._project_conv.weight", "model._blocks.7._bn2.weight", "model._blocks.7._bn2.bias", "model._blocks.7._bn2.running_mean", "model._blocks.7._bn2.running_var", "model._blocks.8._expand_conv.weight", "model._blocks.8._bn0.weight", "model._blocks.8._bn0.bias", "model._blocks.8._bn0.running_mean", "model._blocks.8._bn0.running_var", "model._blocks.8._depthwise_conv.weight", "model._blocks.8._bn1.weight", "model._blocks.8._bn1.bias", "model._blocks.8._bn1.running_mean", "model._blocks.8._bn1.running_var", "model._blocks.8._se_reduce.weight", "model._blocks.8._se_reduce.bias", "model._blocks.8._se_expand.weight", "model._blocks.8._se_expand.bias", "model._blocks.8._project_conv.weight", "model._blocks.8._bn2.weight", "model._blocks.8._bn2.bias", "model._blocks.8._bn2.running_mean", "model._blocks.8._bn2.running_var", 
"model._blocks.9._expand_conv.weight", "model._blocks.9._bn0.weight", "model._blocks.9._bn0.bias", "model._blocks.9._bn0.running_mean", "model._blocks.9._bn0.running_var", "model._blocks.9._depthwise_conv.weight", "model._blocks.9._bn1.weight", "model._blocks.9._bn1.bias", "model._blocks.9._bn1.running_mean", "model._blocks.9._bn1.running_var", "model._blocks.9._se_reduce.weight", "model._blocks.9._se_reduce.bias", "model._blocks.9._se_expand.weight", "model._blocks.9._se_expand.bias", "model._blocks.9._project_conv.weight", "model._blocks.9._bn2.weight", "model._blocks.9._bn2.bias", "model._blocks.9._bn2.running_mean", "model._blocks.9._bn2.running_var", "model._blocks.10._expand_conv.weight", "model._blocks.10._bn0.weight", "model._blocks.10._bn0.bias", "model._blocks.10._bn0.running_mean", "model._blocks.10._bn0.running_var", "model._blocks.10._depthwise_conv.weight", "model._blocks.10._bn1.weight", "model._blocks.10._bn1.bias", "model._blocks.10._bn1.running_mean", "model._blocks.10._bn1.running_var", "model._blocks.10._se_reduce.weight", "model._blocks.10._se_reduce.bias", "model._blocks.10._se_expand.weight", "model._blocks.10._se_expand.bias", "model._blocks.10._project_conv.weight", "model._blocks.10._bn2.weight", "model._blocks.10._bn2.bias", "model._blocks.10._bn2.running_mean", "model._blocks.10._bn2.running_var", "model._blocks.11._expand_conv.weight", "model._blocks.11._bn0.weight", "model._blocks.11._bn0.bias", "model._blocks.11._bn0.running_mean", "model._blocks.11._bn0.running_var", "model._blocks.11._depthwise_conv.weight", "model._blocks.11._bn1.weight", "model._blocks.11._bn1.bias", "model._blocks.11._bn1.running_mean", "model._blocks.11._bn1.running_var", "model._blocks.11._se_reduce.weight", "model._blocks.11._se_reduce.bias", "model._blocks.11._se_expand.weight", "model._blocks.11._se_expand.bias", "model._blocks.11._project_conv.weight", "model._blocks.11._bn2.weight", "model._blocks.11._bn2.bias", "model._blocks.11._bn2.running_mean", 
"model._blocks.11._bn2.running_var", "model._blocks.12._expand_conv.weight", "model._blocks.12._bn0.weight", "model._blocks.12._bn0.bias", "model._blocks.12._bn0.running_mean", "model._blocks.12._bn0.running_var", "model._blocks.12._depthwise_conv.weight", "model._blocks.12._bn1.weight", "model._blocks.12._bn1.bias", "model._blocks.12._bn1.running_mean", "model._blocks.12._bn1.running_var", "model._blocks.12._se_reduce.weight", "model._blocks.12._se_reduce.bias", "model._blocks.12._se_expand.weight", "model._blocks.12._se_expand.bias", "model._blocks.12._project_conv.weight", "model._blocks.12._bn2.weight", "model._blocks.12._bn2.bias", "model._blocks.12._bn2.running_mean", "model._blocks.12._bn2.running_var", "model._blocks.13._expand_conv.weight", "model._blocks.13._bn0.weight", "model._blocks.13._bn0.bias", "model._blocks.13._bn0.running_mean", "model._blocks.13._bn0.running_var", "model._blocks.13._depthwise_conv.weight", "model._blocks.13._bn1.weight", "model._blocks.13._bn1.bias", "model._blocks.13._bn1.running_mean", "model._blocks.13._bn1.running_var", "model._blocks.13._se_reduce.weight", "model._blocks.13._se_reduce.bias", "model._blocks.13._se_expand.weight", "model._blocks.13._se_expand.bias", "model._blocks.13._project_conv.weight", "model._blocks.13._bn2.weight", "model._blocks.13._bn2.bias", "model._blocks.13._bn2.running_mean", "model._blocks.13._bn2.running_var", "model._blocks.14._expand_conv.weight", "model._blocks.14._bn0.weight", "model._blocks.14._bn0.bias", "model._blocks.14._bn0.running_mean", "model._blocks.14._bn0.running_var", "model._blocks.14._depthwise_conv.weight", "model._blocks.14._bn1.weight", "model._blocks.14._bn1.bias", "model._blocks.14._bn1.running_mean", "model._blocks.14._bn1.running_var", "model._blocks.14._se_reduce.weight", "model._blocks.14._se_reduce.bias", "model._blocks.14._se_expand.weight", "model._blocks.14._se_expand.bias", "model._blocks.14._project_conv.weight", "model._blocks.14._bn2.weight", 
"model._blocks.14._bn2.bias", "model._blocks.14._bn2.running_mean", "model._blocks.14._bn2.running_var", "model._blocks.15._expand_conv.weight", "model._blocks.15._bn0.weight", "model._blocks.15._bn0.bias", "model._blocks.15._bn0.running_mean", "model._blocks.15._bn0.running_var", "model._blocks.15._depthwise_conv.weight", "model._blocks.15._bn1.weight", "model._blocks.15._bn1.bias", "model._blocks.15._bn1.running_mean", "model._blocks.15._bn1.running_var", "model._blocks.15._se_reduce.weight", "model._blocks.15._se_reduce.bias", "model._blocks.15._se_expand.weight", "model._blocks.15._se_expand.bias", "model._blocks.15._project_conv.weight", "model._blocks.15._bn2.weight", "model._blocks.15._bn2.bias", "model._blocks.15._bn2.running_mean", "model._blocks.15._bn2.running_var", "model._conv_head.weight", "model._bn1.weight", "model._bn1.bias", "model._bn1.running_mean", "model._bn1.running_var", "model._fc.regular_outputs_layer.weight", "model._fc.regular_outputs_layer.bias".
Unexpected key(s) in state_dict: "model.conv1.weight", "model.bn1.weight", "model.bn1.bias", "model.bn1.running_mean", "model.bn1.running_var", "model.bn1.num_batches_tracked", "model.layer1.0.conv1.weight", "model.layer1.0.bn1.weight", "model.layer1.0.bn1.bias", "model.layer1.0.bn1.running_mean", "model.layer1.0.bn1.running_var", "model.layer1.0.bn1.num_batches_tracked", "model.layer1.0.conv2.weight", "model.layer1.0.bn2.weight", "model.layer1.0.bn2.bias", "model.layer1.0.bn2.running_mean", "model.layer1.0.bn2.running_var", "model.layer1.0.bn2.num_batches_tracked", "model.layer1.1.conv1.weight", "model.layer1.1.bn1.weight", "model.layer1.1.bn1.bias", "model.layer1.1.bn1.running_mean", "model.layer1.1.bn1.running_var", "model.layer1.1.bn1.num_batches_tracked", "model.layer1.1.conv2.weight", "model.layer1.1.bn2.weight", "model.layer1.1.bn2.bias", "model.layer1.1.bn2.running_mean", "model.layer1.1.bn2.running_var", "model.layer1.1.bn2.num_batches_tracked", "model.layer1.2.conv1.weight", "model.layer1.2.bn1.weight", "model.layer1.2.bn1.bias", "model.layer1.2.bn1.running_mean", "model.layer1.2.bn1.running_var", "model.layer1.2.bn1.num_batches_tracked", "model.layer1.2.conv2.weight", "model.layer1.2.bn2.weight", "model.layer1.2.bn2.bias", "model.layer1.2.bn2.running_mean", "model.layer1.2.bn2.running_var", "model.layer1.2.bn2.num_batches_tracked", "model.layer2.0.conv1.weight", "model.layer2.0.bn1.weight", "model.layer2.0.bn1.bias", "model.layer2.0.bn1.running_mean", "model.layer2.0.bn1.running_var", "model.layer2.0.bn1.num_batches_tracked", "model.layer2.0.conv2.weight", "model.layer2.0.bn2.weight", "model.layer2.0.bn2.bias", "model.layer2.0.bn2.running_mean", "model.layer2.0.bn2.running_var", "model.layer2.0.bn2.num_batches_tracked", "model.layer2.0.downsample.0.weight", "model.layer2.0.downsample.1.weight", "model.layer2.0.downsample.1.bias", "model.layer2.0.downsample.1.running_mean", "model.layer2.0.downsample.1.running_var", 
"model.layer2.0.downsample.1.num_batches_tracked", "model.layer2.1.conv1.weight", "model.layer2.1.bn1.weight", "model.layer2.1.bn1.bias", "model.layer2.1.bn1.running_mean", "model.layer2.1.bn1.running_var", "model.layer2.1.bn1.num_batches_tracked", "model.layer2.1.conv2.weight", "model.layer2.1.bn2.weight", "model.layer2.1.bn2.bias", "model.layer2.1.bn2.running_mean", "model.layer2.1.bn2.running_var", "model.layer2.1.bn2.num_batches_tracked", "model.layer2.2.conv1.weight", "model.layer2.2.bn1.weight", "model.layer2.2.bn1.bias", "model.layer2.2.bn1.running_mean", "model.layer2.2.bn1.running_var", "model.layer2.2.bn1.num_batches_tracked", "model.layer2.2.conv2.weight", "model.layer2.2.bn2.weight", "model.layer2.2.bn2.bias", "model.layer2.2.bn2.running_mean", "model.layer2.2.bn2.running_var", "model.layer2.2.bn2.num_batches_tracked", "model.layer2.3.conv1.weight", "model.layer2.3.bn1.weight", "model.layer2.3.bn1.bias", "model.layer2.3.bn1.running_mean", "model.layer2.3.bn1.running_var", "model.layer2.3.bn1.num_batches_tracked", "model.layer2.3.conv2.weight", "model.layer2.3.bn2.weight", "model.layer2.3.bn2.bias", "model.layer2.3.bn2.running_mean", "model.layer2.3.bn2.running_var", "model.layer2.3.bn2.num_batches_tracked", "model.layer3.0.conv1.weight", "model.layer3.0.bn1.weight", "model.layer3.0.bn1.bias", "model.layer3.0.bn1.running_mean", "model.layer3.0.bn1.running_var", "model.layer3.0.bn1.num_batches_tracked", "model.layer3.0.conv2.weight", "model.layer3.0.bn2.weight", "model.layer3.0.bn2.bias", "model.layer3.0.bn2.running_mean", "model.layer3.0.bn2.running_var", "model.layer3.0.bn2.num_batches_tracked", "model.layer3.0.downsample.0.weight", "model.layer3.0.downsample.1.weight", "model.layer3.0.downsample.1.bias", "model.layer3.0.downsample.1.running_mean", "model.layer3.0.downsample.1.running_var", "model.layer3.0.downsample.1.num_batches_tracked", "model.layer3.1.conv1.weight", "model.layer3.1.bn1.weight", "model.layer3.1.bn1.bias", 
"model.layer3.1.bn1.running_mean", "model.layer3.1.bn1.running_var", "model.layer3.1.bn1.num_batches_tracked", "model.layer3.1.conv2.weight", "model.layer3.1.bn2.weight", "model.layer3.1.bn2.bias", "model.layer3.1.bn2.running_mean", "model.layer3.1.bn2.running_var", "model.layer3.1.bn2.num_batches_tracked", "model.layer3.2.conv1.weight", "model.layer3.2.bn1.weight", "model.layer3.2.bn1.bias", "model.layer3.2.bn1.running_mean", "model.layer3.2.bn1.running_var", "model.layer3.2.bn1.num_batches_tracked", "model.layer3.2.conv2.weight", "model.layer3.2.bn2.weight", "model.layer3.2.bn2.bias", "model.layer3.2.bn2.running_mean", "model.layer3.2.bn2.running_var", "model.layer3.2.bn2.num_batches_tracked", "model.layer3.3.conv1.weight", "model.layer3.3.bn1.weight", "model.layer3.3.bn1.bias", "model.layer3.3.bn1.running_mean", "model.layer3.3.bn1.running_var", "model.layer3.3.bn1.num_batches_tracked", "model.layer3.3.conv2.weight", "model.layer3.3.bn2.weight", "model.layer3.3.bn2.bias", "model.layer3.3.bn2.running_mean", "model.layer3.3.bn2.running_var", "model.layer3.3.bn2.num_batches_tracked", "model.layer3.4.conv1.weight", "model.layer3.4.bn1.weight", "model.layer3.4.bn1.bias", "model.layer3.4.bn1.running_mean", "model.layer3.4.bn1.running_var", "model.layer3.4.bn1.num_batches_tracked", "model.layer3.4.conv2.weight", "model.layer3.4.bn2.weight", "model.layer3.4.bn2.bias", "model.layer3.4.bn2.running_mean", "model.layer3.4.bn2.running_var", "model.layer3.4.bn2.num_batches_tracked", "model.layer3.5.conv1.weight", "model.layer3.5.bn1.weight", "model.layer3.5.bn1.bias", "model.layer3.5.bn1.running_mean", "model.layer3.5.bn1.running_var", "model.layer3.5.bn1.num_batches_tracked", "model.layer3.5.conv2.weight", "model.layer3.5.bn2.weight", "model.layer3.5.bn2.bias", "model.layer3.5.bn2.running_mean", "model.layer3.5.bn2.running_var", "model.layer3.5.bn2.num_batches_tracked", "model.layer4.0.conv1.weight", "model.layer4.0.bn1.weight", "model.layer4.0.bn1.bias", 
"model.layer4.0.bn1.running_mean", "model.layer4.0.bn1.running_var", "model.layer4.0.bn1.num_batches_tracked", "model.layer4.0.conv2.weight", "model.layer4.0.bn2.weight", "model.layer4.0.bn2.bias", "model.layer4.0.bn2.running_mean", "model.layer4.0.bn2.running_var", "model.layer4.0.bn2.num_batches_tracked", "model.layer4.0.downsample.0.weight", "model.layer4.0.downsample.1.weight", "model.layer4.0.downsample.1.bias", "model.layer4.0.downsample.1.running_mean", "model.layer4.0.downsample.1.running_var", "model.layer4.0.downsample.1.num_batches_tracked", "model.layer4.1.conv1.weight", "model.layer4.1.bn1.weight", "model.layer4.1.bn1.bias", "model.layer4.1.bn1.running_mean", "model.layer4.1.bn1.running_var", "model.layer4.1.bn1.num_batches_tracked", "model.layer4.1.conv2.weight", "model.layer4.1.bn2.weight", "model.layer4.1.bn2.bias", "model.layer4.1.bn2.running_mean", "model.layer4.1.bn2.running_var", "model.layer4.1.bn2.num_batches_tracked", "model.layer4.2.conv1.weight", "model.layer4.2.bn1.weight", "model.layer4.2.bn1.bias", "model.layer4.2.bn1.running_mean", "model.layer4.2.bn1.running_var", "model.layer4.2.bn1.num_batches_tracked", "model.layer4.2.conv2.weight", "model.layer4.2.bn2.weight", "model.layer4.2.bn2.bias", "model.layer4.2.bn2.running_mean", "model.layer4.2.bn2.running_var", "model.layer4.2.bn2.num_batches_tracked", "model.fc.regular_outputs_layer.weight", "model.fc.regular_outputs_layer.bias".
load_state_dict seems to be causing the problem.
How can I solve this?
Thank you.
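One way to read this error: the "missing" keys (`model._conv_stem`, `model._blocks.N...`) look like an EfficientNet backbone, while the "unexpected" keys (`model.conv1`, `model.layerN...`) look like a torchvision ResNet, i.e. the model being built and the checkpoint on disk disagree about the backbone. A quick hedged diagnostic is to collapse the parameter names on each side to their leading prefixes and compare:

```python
def key_prefixes(keys, depth=2):
    """Collapse parameter names to their first `depth` dot-separated pieces."""
    return sorted({".".join(k.split(".")[:depth]) for k in keys})

# Example with one key of each flavor from the error message above:
print(key_prefixes([
    "model.layer1.0.conv1.weight",   # unexpected -> torchvision ResNet style
    "model._blocks.0._bn1.weight",   # missing    -> EfficientNet style
]))

# Against the real files (paths from this issue's experiment layout):
# import torch
# state = torch.load("experiments/tusimple/models/model_2695.pt",
#                    map_location="cpu")["model"]
# print(key_prefixes(state.keys()))
# print(key_prefixes(model.state_dict().keys()))
```

If the two prefix sets differ, the fix is to use the config the checkpoint was trained with (as the maintainer suggests), not to relax `load_state_dict`.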