lucastabelini / PolyLaneNet

Code for the paper entitled "PolyLaneNet: Lane Estimation via Deep Polynomial Regression" (ICPR 2020)
https://arxiv.org/abs/2004.10924
MIT License

A problem about Test.py #43

chenbokaix250 closed this issue 3 years ago

chenbokaix250 commented 3 years ago

When I executed this:

python3 test.py --exp_name tusimple --cfg config.yaml --epoch 2695

[2021-03-09 08:49:42,416] [INFO] Starting testing.
[2021-03-09 08:49:42,583] [ERROR] Uncaught exception
Traceback (most recent call last):
  File "test.py", line 159, in <module>
    _, mean_loss = test(model, test_loader, evaluator, exp_root, cfg, epoch=test_epoch, view=args.view)
  File "test.py", line 23, in test
    model.load_state_dict(torch.load(os.path.join(exp_root, "models", "model_{:03d}.pt".format(epoch)))['model'])
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1224, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for PolyRegression:
    Missing key(s) in state_dict: "model._conv_stem.weight", "model._bn0.weight", "model._bn0.bias", "model._bn0.running_mean", "model._bn0.running_var", "model._blocks.0._depthwise_conv.weight", ... (EfficientNet parameter keys for model._blocks.0 through model._blocks.15) ..., "model._conv_head.weight", "model._bn1.weight", "model._bn1.bias", "model._bn1.running_mean", "model._bn1.running_var", "model._fc.regular_outputs_layer.weight", "model._fc.regular_outputs_layer.bias".
    Unexpected key(s) in state_dict: "model.conv1.weight", "model.bn1.weight", "model.bn1.bias", "model.bn1.running_mean", "model.bn1.running_var", "model.bn1.num_batches_tracked", ... (ResNet parameter keys for model.layer1 through model.layer4) ..., "model.fc.regular_outputs_layer.weight", "model.fc.regular_outputs_layer.bias".

It looks like load_state_dict is causing the problem.
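A quick way to pin down a mismatch like this is to compare the keys the checkpoint actually holds against the keys the model expects. Below is a minimal diagnostic sketch, not code from the repository; the path and epoch follow the command above, and the key names come from the error message:

```python
import os

import torch

exp_root = "experiments/tusimple"  # illustrative experiment directory
epoch = 2695

checkpoint = torch.load(
    os.path.join(exp_root, "models", "model_{:03d}.pt".format(epoch)),
    map_location="cpu",
)

# The training script saves a wrapper dict, not a bare state_dict.
print(list(checkpoint.keys()))  # e.g. ['model', 'optimizer', 'lr_scheduler', 'epoch']

# Parameter names reveal which backbone produced the weights:
# "model.conv1...", "model.layer1..." come from a ResNet, while
# "model._conv_stem...", "model._blocks..." come from an EfficientNet.
for name in list(checkpoint["model"].keys())[:5]:
    print(name)
```

If the prefixes printed here do not match the backbone named in config.yaml, the config and the checkpoint come from different experiments.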

How can I solve this?

Thank you.

lucastabelini commented 3 years ago

Are you using the correct config file, i.e., the one used to train the tusimple experiment? Which PyTorch version are you using?

chenbokaix250 commented 3 years ago

> Are you using the correct config file, i.e., the one used to train the tusimple experiment? Which PyTorch version are you using?

In config.yaml, I only changed the dataset path. I'm using macOS 11.1, torch 1.7.1, and Python 3.8.7.

Complete output:

python3 test.py --exp_name tusimple --cfg config.yaml --epoch 2695
[2021-03-10 08:48:50,049] [INFO] Experiment name: tusimple
[2021-03-10 08:48:50,050] [INFO] Config:
# Training settings
exps_dir: 'experiments/'
iter_log_interval: 1
iter_time_window: 100
model_save_interval: 1
seed: 1
backup:
model:
  name: PolyRegression
  parameters:
    num_outputs: 35 # (5 lanes) * (1 conf + 2 (upper & lower) + 4 poly coeffs)
    pretrained: true
    backbone: 'resnet50'
    pred_category: false
    curriculum_steps: [0, 0, 0, 0]
loss_parameters:
  conf_weight: 1
  lower_weight: 1
  upper_weight: 1
  cls_weight: 0
  poly_weight: 300
batch_size: 16
epochs: 2695
optimizer:
  name: Adam
  parameters:
    lr: 3.0e-4
lr_scheduler:
  name: CosineAnnealingLR
  parameters:
    T_max: 385

# Testing settings
test_parameters:
  conf_threshold: 0.5

# Dataset settings
datasets:
  train:
    type: LaneDataset
    parameters:
      dataset: tusimple
      split: train
      img_size: [360, 640]
      normalize: true
      aug_chance: 0.9090909090909091 # 10/11
      augmentations:
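The num_outputs comment in the config above encodes how the 35 outputs break down per lane; spelled out as a quick check:

```python
# Output layout per the config comment: each lane contributes
# 1 confidence score, 2 vertical bounds (upper, lower), and 4 polynomial coefficients.
lanes = 5
per_lane = 1 + 2 + 4
assert lanes * per_lane == 35
```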

[2021-03-10 08:48:50,050] [INFO] Args: Namespace(batch_size=None, cfg='config.yaml', epoch=2695, exp_name='tusimple', view=False)
total annos 358
Transforming annotations...
Done.
fatal: not a git repository (or any of the parent directories): .git
warning: Not a git repository. Use --no-index to compare two paths outside a working tree
usage: git diff --no-index [<options>] <path> <path>


[2021-03-10 08:48:50,777] [INFO] Code state:
Git hash:

Git diff:

[2021-03-10 08:48:50,778] [INFO] Starting testing.
[2021-03-10 08:48:51,355] [ERROR] Uncaught exception
Traceback (most recent call last):
  File "test.py", line 162, in <module>
    _, mean_loss = test(model, test_loader, evaluator, exp_root, cfg, epoch=test_epoch, view=args.view)
  File "test.py", line 26, in test
    model.load_state_dict(torch.load(weights_path, map_location='cpu'))
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PolyRegression:
    Missing key(s) in state_dict: "model.conv1.weight", "model.bn1.weight", "model.bn1.bias", "model.bn1.running_mean", "model.bn1.running_var", ... (ResNet-50 parameter keys for model.layer1 through model.layer4) ..., "model.fc.regular_outputs_layer.weight", "model.fc.regular_outputs_layer.bias".
    Unexpected key(s) in state_dict: "optimizer", "lr_scheduler", "epoch".


chenbokaix250 commented 3 years ago

> Are you using the correct config file, i.e., the one used to train the tusimple experiment? Which PyTorch version are you using?

I found the reason. This line causes the error:

model.load_state_dict(torch.load(os.path.join(exp_root, "models", "model_{:03d}.pt".format(epoch)))['model'])

Changing it to

model.load_state_dict(torch.load(os.path.join(exp_root, "models", "model_{:03d}.pt".format(epoch)))['model'], False)

solved the problem!
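For reference, the second positional argument to load_state_dict is strict, so this call passes strict=False. That suppresses the missing/unexpected-key error rather than resolving it: keys that do not match are silently skipped, and those parameters keep their freshly initialized values. A sketch with the flag spelled out (same illustrative paths as above; `model` is again the PolyRegression instance from test.py):

```python
import os

import torch

exp_root = "experiments/tusimple"  # illustrative experiment directory
epoch = 2695

checkpoint = torch.load(
    os.path.join(exp_root, "models", "model_{:03d}.pt".format(epoch)),
    map_location="cpu",
)

# With strict=False, load_state_dict returns instead of raising;
# the result reports which keys failed to match.
# result = model.load_state_dict(checkpoint["model"], strict=False)
# print(result.missing_keys, result.unexpected_keys)
```

If missing_keys or unexpected_keys is non-empty, the loaded model is not the trained one, so evaluation results should be treated with suspicion.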

Thank you for your work!