wbw520 / NoisyLSTM

Noisy-LSTM: Improving Temporal Awareness for Video Semantic Segmentation

error in step 2 training #3

Open fjremnav opened 3 years ago

fjremnav commented 3 years ago

I am able to train the base model. However, step 2 training fails with the error below when it tries to load PSPNet.pt from step 1:

python train.py --model_name PSPNet --lstm True --use_pre True --noise False --data_dir cityscapes/

Traceback (most recent call last):
  File "train.py", line 108, in <module>
    train(args)
  File "train.py", line 78, in train
    init_model = load_model(args)
  File "/home/user/4TB/user1/NoisyLSTM/tools/tool.py", line 285, in load_model
    pre_model.load_state_dict(new_state_dict, strict=True)
  File "/home/user1/anaconda3/envs/remnav/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for PspNet:
  Missing key(s) in state_dict: "layer0.0.weight", "layer0.1.weight", "layer0.1.bias", "layer0.1.running_mean", "layer0.1.running_var", "layer1.0.conv1.weight", "layer1.0.bn1.weight", "layer1.0.bn1.bias", "layer1.0.bn1.running_mean", "layer1.0.bn1.running_var", "layer1.0.conv2.weight", "layer1.0.bn2.weight", "layer1.0.bn2.bias", "layer1.0.bn2.running_mean", "layer1.0.bn2.running_var", "layer1.0.conv3.weight", "layer1.0.bn3.weight", "layer1.0.bn3.bias", "layer1.0.b ...

Any idea why this happens?

Thanks,

wbw520 commented 3 years ago

Hi,

I think this is a bug caused by the multi-GPU setting. I only run the code with multiple GPUs. If you pre-trained the model on a single GPU with "--multi_gpu False", one solution is to change tools/tool.py line 283 from "name = k[7:]" to "name = k". (The former is there to remove the prefix that nn.DataParallel adds to the saved parameter names; that prefix does not appear when training on a single GPU.) Hope it can help you.
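For reference, a minimal sketch of that prefix-stripping logic which handles both cases, assuming the .pt file stores the state_dict directly (this is not the repository's exact code; the function name and paths are placeholders):

```python
from collections import OrderedDict

import torch


def load_pretrained(model, checkpoint_path):
    """Load a checkpoint, stripping the "module." prefix only when it is present.

    nn.DataParallel saves every parameter as "module.<name>", which is why the
    repository uses name = k[7:]; a single-GPU checkpoint has no such prefix,
    so its keys must be kept as-is -- that mismatch is what triggers the
    "Missing key(s) in state_dict" error above.
    """
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    new_state_dict = OrderedDict()
    for k, v in state_dict.items():
        name = k[7:] if k.startswith("module.") else k  # drop "module." only if present
        new_state_dict[name] = v
    model.load_state_dict(new_state_dict, strict=True)
    return model
```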

good day,

fjremnav commented 3 years ago

Your answer fixed it. One more question: how do I test my own images or videos with test.py?

Thanks,

wbw520 commented 3 years ago

test.py is actually used for evaluating mAP. If you want to visualize your own images or videos, please use tools/factory.py. I have updated a new version of it. (But it is written for Cityscapes (taking the city of Munster as a sample), so you may have to change the root setting for your own data.)
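Roughly, prediction-only inference on a single image looks like the sketch below; the PspNet import path, constructor arguments, checkpoint name, normalization values, and input size here are placeholders rather than the exact ones in factory.py, so adapt them to your setup:

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

# Hypothetical import path -- use the repository's actual PSPNet class.
from model.base_model import PspNet

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = PspNet(num_classes=19)                      # Cityscapes uses 19 train classes
state = torch.load("PSPNet.pt", map_location=device)
model.load_state_dict(state, strict=False)
model.to(device).eval()

preprocess = transforms.Compose([
    transforms.Resize((512, 1024)),                 # assumed training resolution
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("my_frame.png").convert("RGB")
with torch.no_grad():
    # Assumes the model returns logits of shape (N, C, H, W) in eval mode.
    logits = model(preprocess(image).unsqueeze(0).to(device))
    pred = logits.argmax(dim=1).squeeze(0).cpu().numpy().astype(np.uint8)

# Save the raw class-index map; apply a color palette for nicer viewing.
Image.fromarray(pred).save("my_frame_pred.png")
```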

fjremnav commented 3 years ago

I read factory.py and see ground truth in the code. Unfortunately, my images/videos do not have ground truths. Could I just comment out the ground-truth related code?

Thanks,

wbw520 commented 3 years ago

It is totally ok.
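For example, a guard like the following keeps the script usable either way; gt_path and the branch contents are placeholder names, not factory.py's actual variables:

```python
from PIL import Image

# Hypothetical guard: skip the ground-truth branch when no label file is given.
gt_path = None  # set to a label image path when one exists

if gt_path is not None:
    gt = Image.open(gt_path)
    # ... overlay or compare the prediction with the label here ...
else:
    # Prediction-only mode: just save or display the segmentation map.
    pass
```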

fjremnav commented 3 years ago

My trained model performs quite badly. Do you mind sharing your trained models (PSPNet and PSPNet_Llst_noise) so I can use them as a reference?

Thanks,

wbw520 commented 3 years ago

You can download the weights from the following link: https://drive.google.com/file/d/1R2yPUdGUudr3ADTNdsrUgeXye0QGfdwN/view?usp=sharing