irfanICMLL / structure_knowledge_distillation

The official code for the paper 'Structured Knowledge Distillation for Semantic Segmentation' (CVPR 2019 oral) and its extension to other tasks.
BSD 2-Clause "Simplified" License

TypeError: add_() received an invalid combination of arguments - got (complex, Tensor) #44

Closed. lmunan closed this issue 3 years ago.

lmunan commented 3 years ago

I ran git checkout d1ec858, but after completing 40000 steps the following error occurred.

INFO     step:40000 G_lr:0.000000 G_loss:113.60744(mc:0.13908 pixelwise:113.43781 pairwise:0.00256) D_lr:0.000000 D_loss:0.01955
Traceback (most recent call last):
  File "train_and_eval.py", line 25, in <module>
    model.optimize_parameters()
  File "/home/shine/文档/structure_knowledge_distillation/networks/kd_model.py", line 171, in optimize_parameters
    self.G_solver.step()
  File "/home/shine/anaconda3/envs/wangnan/lib/python3.6/site-packages/torch/optim/sgd.py", line 107, in step
    p.data.add_(-group['lr'], d_p)
TypeError: add_() received an invalid combination of arguments - got (complex, Tensor), but expected one of:
 * (Tensor other, Number alpha)
      didn't match because some of the arguments have invalid types: (complex, Tensor)
 * (Number other, Number alpha)
      didn't match because some of the arguments have invalid types: (complex, Tensor)
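
For anyone hitting the same crash: this looks like a learning-rate schedule running one step past its final iteration. In Python 3, a negative float raised to a fractional power returns a complex number, and passing that complex value as the lr to SGD produces exactly this add_() TypeError. A minimal sketch of the failure mode (the poly-decay form and the names below are assumptions for illustration, not code taken from this repo):

# Illustration only: how a poly LR schedule can yield a complex lr.
# Assumes a decay of the form base_lr * (1 - cur_iter / max_iter) ** power.
base_lr, power, max_iter = 0.01, 0.9, 40000

def poly_lr(cur_iter):
    return base_lr * (1 - cur_iter / max_iter) ** power

print(poly_lr(39999))  # tiny positive float, as expected
print(poly_lr(40001))  # negative base ** fractional power -> complex in Python 3

# One defensive fix is to clamp the decay factor at zero:
def poly_lr_clamped(cur_iter):
    return base_lr * max(0.0, 1 - cur_iter / max_iter) ** power

Since the crash only happens after the last scheduled step, the checkpoint saved at step 40000 should not be affected, which matches the report below that the final evaluation was still good.
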
irfanICMLL commented 3 years ago

I have not met this error before. Could you please check whether the checkpoint has been saved? You can try evaluating the checkpoint; it should have similar performance to ours.
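
A quick, generic way to confirm that the checkpoint was fully written before the crash (the path below is a placeholder; use whatever file your run actually saved):

import torch

ckpt_path = 'snapshots/CS_scenes_40000.pth'  # placeholder: your actual checkpoint file
ckpt = torch.load(ckpt_path, map_location='cpu')
state = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt
print(len(state), 'tensors saved')           # a non-empty state_dict means the save completed
print(list(state.keys())[:5])                # spot-check a few parameter names

If this loads cleanly, evaluating it as usual should give the expected numbers.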

lmunan commented 3 years ago

Although there was an error at the end, the final evaluation still gave good performance. Now I want to use the official resnet18-imagenet weights for pretraining, and I ran into the same problem as #20. I replaced the 3x3 conv1 with a 7x7 conv so that the pretrained weights could be loaded, and then the following problem appeared.

Traceback (most recent call last):
  File "train_and_eval.py", line 19, in <module>
    model = NetModel(args)
  File "/home/mist/structure_knowledge_distillation/networks/kd_model.py", line 58, in __init__
    load_S_model(args, student, False)
  File "/home/mist/structure_knowledge_distillation/utils/utils.py", line 103, in load_S_model
    model.load_state_dict(new_params)
  File "/home/mist/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 719, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ResNet:
        size mismatch for layer1.0.conv1.weight: copying a param of torch.Size([64, 128, 3, 3]) from checkpoint, where the shape is torch.Size([64, 64, 3, 3]) in current model.
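
The message says the checkpoint's layer1.0.conv1 takes 128 input channels while the current model expects 64, so the stem widths apparently still differ (128 vs. 64 channels feeding layer1). A generic workaround (standard PyTorch, not the authors' code; the helper name is made up for illustration) is to copy only the parameters whose names and shapes match and leave the rest at their random initialization:

import torch

def load_matching_weights(model, ckpt_path):
    """Copy only the parameters whose names and shapes match the current model."""
    pretrained = torch.load(ckpt_path, map_location='cpu')
    if isinstance(pretrained, dict) and 'state_dict' in pretrained:
        pretrained = pretrained['state_dict']
    own = model.state_dict()
    matched = {k: v for k, v in pretrained.items()
               if k in own and v.shape == own[k].shape}
    skipped = [k for k in pretrained if k not in matched]
    own.update(matched)
    model.load_state_dict(own)
    print('loaded', len(matched), 'tensors; skipped', len(skipped), 'e.g.', skipped[:3])
    return model

Note that any skipped layers (here the early convolutions) then train from scratch, so results will not exactly match a fully loaded pretrained stem.
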
wl082013 commented 3 years ago

Although there was an error at the end, the final evaluation still gave good performance. Now I want to use the official resnet18-imagenet weights for pretraining, and I ran into the same problem as #20. I replaced the 3x3 conv1 with a 7x7 conv so that the pretrained weights could be loaded, and then the following problem appeared.

I have the same problem. Did you solve it?

wl082013 commented 3 years ago

I ran git checkout d1ec858, but after completing 40000 steps the following error occurred.

I have exactly the same problem.

lmunan commented 3 years ago

Although there was an error at the end, the final evaluation still gave good performance. Now I want to use the official resnet18-imagenet weights for pretraining, and I ran into the same problem as #20. I replaced the 3x3 conv1 with a 7x7 conv so that the pretrained weights could be loaded, and then the following problem appeared.

I have the same problem. Did you solve it?

I did not use the official pretrained model; I downloaded the pretrained model from here instead. No error was reported, but I have not checked the final result yet.