BADBADBADBOY / pytorchOCR

A PyTorch-based OCR algorithm library, including PSENet, PAN, DBNet, SAST, and CRNN.
676 stars · 133 forks

KeyError: 'module.backbone.conv1.weight' #34

Closed seekingdeep closed 3 years ago

seekingdeep commented 3 years ago

When running inference, I am getting this error:

(pocr) home@home-lnx:~/programs/pytorchOCR$ python ./tools/det_infer.py --config ./config/det_DB_mobilev3.yaml --model_path ./checkpoint/ag_DB_bb_mobilenet_v3_small_he_DB_Head_bs_16_ep_1200/DB_best.pth.tar --img_path ./input  --result_save_path ./result
Traceback (most recent call last):
  File "./ptocr/utils/util_function.py", line 197, in load_model
    model.load_state_dict(model_dict)
  File "/home/home/anaconda3/envs/pocr/lib/python3.6/site-packages/torch/nn/modules/module.py", line 830, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DetModel:
    Missing key(s) in state_dict: "head.smooth1.conv.weight", "head.smooth1.bn.weight", "head.smooth1.bn.bias", "head.smooth1.bn.running_mean", "head.smooth1.bn.running_var", "head.smooth2.conv.weight", "head.smooth2.bn.weight", "head.smooth2.bn.bias", "head.smooth2.bn.running_mean", "head.smooth2.bn.running_var", "head.smooth3.conv.weight", "head.smooth3.bn.weight", "head.smooth3.bn.bias", "head.smooth3.bn.running_mean", "head.smooth3.bn.running_var", "head.smooth4.conv.weight", "head.smooth4.bn.weight", "head.smooth4.bn.bias", "head.smooth4.bn.running_mean", "head.smooth4.bn.running_var", "head.out.conv.weight", "head.out.bn.weight", "head.out.bn.bias", "head.out.bn.running_mean", "head.out.bn.running_var", "head.fpem_1.up_block1.dw_conv.weight", "head.fpem_1.up_block1.point_conv.weight", "head.fpem_1.up_block1.point_bn.weight", "head.fpem_1.up_block1.point_bn.bias", "head.fpem_1.up_block1.point_bn.running_mean", "head.fpem_1.up_block1.point_bn.running_var", "head.fpem_1.up_block2.dw_conv.weight", "head.fpem_1.up_block2.point_conv.weight", "head.fpem_1.up_block2.point_bn.weight", "head.fpem_1.up_block2.point_bn.bias", "head.fpem_1.up_block2.point_bn.running_mean", "head.fpem_1.up_block2.point_bn.running_var", "head.fpem_1.up_block3.dw_conv.weight", "head.fpem_1.up_block3.point_conv.weight", "head.fpem_1.up_block3.point_bn.weight", "head.fpem_1.up_block3.point_bn.bias", "head.fpem_1.up_block3.point_bn.running_mean", "head.fpem_1.up_block3.point_bn.running_var", "head.fpem_1.down_block1.dw_conv.weight", "head.fpem_1.down_block1.point_conv.weight", "head.fpem_1.down_block1.point_bn.weight", "head.fpem_1.down_block1.point_bn.bias", "head.fpem_1.down_block1.point_bn.running_mean", "head.fpem_1.down_block1.point_bn.running_var", "head.fpem_1.down_block2.dw_conv.weight", "head.fpem_1.down_block2.point_conv.weight", "head.fpem_1.down_block2.point_bn.weight", "head.fpem_1.down_block2.point_bn.bias", "head.fpem_1.down_block2.point_bn.running_mean", 
"head.fpem_1.down_block2.point_bn.running_var", "head.fpem_1.down_block3.dw_conv.weight", "head.fpem_1.down_block3.point_conv.weight", "head.fpem_1.down_block3.point_bn.weight", "head.fpem_1.down_block3.point_bn.bias", "head.fpem_1.down_block3.point_bn.running_mean", "head.fpem_1.down_block3.point_bn.running_var", "head.fpem_2.up_block1.dw_conv.weight", "head.fpem_2.up_block1.point_conv.weight", "head.fpem_2.up_block1.point_bn.weight", "head.fpem_2.up_block1.point_bn.bias", "head.fpem_2.up_block1.point_bn.running_mean", "head.fpem_2.up_block1.point_bn.running_var", "head.fpem_2.up_block2.dw_conv.weight", "head.fpem_2.up_block2.point_conv.weight", "head.fpem_2.up_block2.point_bn.weight", "head.fpem_2.up_block2.point_bn.bias", "head.fpem_2.up_block2.point_bn.running_mean", "head.fpem_2.up_block2.point_bn.running_var", "head.fpem_2.up_block3.dw_conv.weight", "head.fpem_2.up_block3.point_conv.weight", "head.fpem_2.up_block3.point_bn.weight", "head.fpem_2.up_block3.point_bn.bias", "head.fpem_2.up_block3.point_bn.running_mean", "head.fpem_2.up_block3.point_bn.running_var", "head.fpem_2.down_block1.dw_conv.weight", "head.fpem_2.down_block1.point_conv.weight", "head.fpem_2.down_block1.point_bn.weight", "head.fpem_2.down_block1.point_bn.bias", "head.fpem_2.down_block1.point_bn.running_mean", "head.fpem_2.down_block1.point_bn.running_var", "head.fpem_2.down_block2.dw_conv.weight", "head.fpem_2.down_block2.point_conv.weight", "head.fpem_2.down_block2.point_bn.weight", "head.fpem_2.down_block2.point_bn.bias", "head.fpem_2.down_block2.point_bn.running_mean", "head.fpem_2.down_block2.point_bn.running_var", "head.fpem_2.down_block3.dw_conv.weight", "head.fpem_2.down_block3.point_conv.weight", "head.fpem_2.down_block3.point_bn.weight", "head.fpem_2.down_block3.point_bn.bias", "head.fpem_2.down_block3.point_bn.running_mean", "head.fpem_2.down_block3.point_bn.running_var". 
    Unexpected key(s) in state_dict: "head.in5.conv.weight", "head.in5.bn.weight", "head.in5.bn.bias", "head.in5.bn.running_mean", "head.in5.bn.running_var", "head.in5.bn.num_batches_tracked", "head.in4.conv.weight", "head.in4.bn.weight", "head.in4.bn.bias", "head.in4.bn.running_mean", "head.in4.bn.running_var", "head.in4.bn.num_batches_tracked", "head.in3.conv.weight", "head.in3.bn.weight", "head.in3.bn.bias", "head.in3.bn.running_mean", "head.in3.bn.running_var", "head.in3.bn.num_batches_tracked", "head.in2.conv.weight", "head.in2.bn.weight", "head.in2.bn.bias", "head.in2.bn.running_mean", "head.in2.bn.running_var", "head.in2.bn.num_batches_tracked", "head.out5.conv.weight", "head.out5.bn.weight", "head.out5.bn.bias", "head.out5.bn.running_mean", "head.out5.bn.running_var", "head.out5.bn.num_batches_tracked", "head.out4.conv.weight", "head.out4.bn.weight", "head.out4.bn.bias", "head.out4.bn.running_mean", "head.out4.bn.running_var", "head.out4.bn.num_batches_tracked", "head.out3.conv.weight", "head.out3.bn.weight", "head.out3.bn.bias", "head.out3.bn.running_mean", "head.out3.bn.running_var", "head.out3.bn.num_batches_tracked", "head.out2.conv.weight", "head.out2.bn.weight", "head.out2.bn.bias", "head.out2.bn.running_mean", "head.out2.bn.running_var", "head.out2.bn.num_batches_tracked". 

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./tools/det_infer.py", line 192, in <module>
    InferImage(config)
  File "./tools/det_infer.py", line 151, in InferImage
    test_bin = TestProgram(config)
  File "./tools/det_infer.py", line 85, in __init__
    model = load_model(model,config['infer']['model_path'])
  File "./ptocr/utils/util_function.py", line 202, in load_model
    state[key] = model_dict['module.' + key]
KeyError: 'module.backbone.conv1.weight'
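
For reference on the mechanics of this KeyError: the fallback at util_function.py line 202 prefixes every model key with "module.", which assumes the checkpoint was saved from a torch.nn.DataParallel-wrapped model. If neither the bare key nor the prefixed key exists in the checkpoint (here the checkpoint simply contains a different head's keys), the dict lookup raises KeyError. A sketch of a more tolerant key-normalization step; normalize_state_dict is a hypothetical helper, not a function in this repo:

```python
def normalize_state_dict(saved: dict, wants_module_prefix: bool) -> dict:
    """Add or strip the 'module.' prefix so checkpoint keys match the model.

    Checkpoints saved from a DataParallel-wrapped model carry a 'module.'
    prefix on every key; models loaded without the wrapper do not.
    """
    out = {}
    for key, value in saved.items():
        has_prefix = key.startswith("module.")
        if wants_module_prefix and not has_prefix:
            key = "module." + key
        elif not wants_module_prefix and has_prefix:
            key = key[len("module."):]
        out[key] = value
    return out
```

Note this only fixes prefix mismatches; it cannot help when the checkpoint was trained with a different head architecture, which is what the missing/unexpected key lists above indicate.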
BADBADBADBOY commented 3 years ago

What were you doing when this error was raised? If you just paste the error with no context, I can't tell why it happened either.

seekingdeep commented 3 years ago

I am using the latest version of this repository. I ran the text detection command: python ./tools/det_infer.py --config ./config/det_DB_mobilev3.yaml --model_path ./checkpoint/ag_DB_bb_mobilenet_v3_small_he_DB_Head_bs_16_ep_1200/DB_best.pth.tar --img_path ./input --result_save_path ./result

Then I got this error:

KeyError: 'module.backbone.conv1.weight'
seekingdeep commented 3 years ago

@BADBADBADBOY please post the full requirements:

pip freeze > requirements.txt
BADBADBADBOY commented 3 years ago

You need to change the head in the yaml config: use the function ptocr.model.head.det_DBHead,DB_Head. This model was trained with the DB head.
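
In det_DB_mobilev3.yaml this corresponds to a head entry roughly like the following; only the function value is taken from the reply above, and the exact nesting is an assumption about the config layout:

```yaml
# sketch -- exact nesting follows det_DB_mobilev3.yaml
head:
  # head that the DB_best.pth.tar checkpoint was trained with
  function: ptocr.model.head.det_DBHead,DB_Head
```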

seekingdeep commented 3 years ago

After changing the yaml head function, I now get:

(pocr) home@home-lnx:~/programs/pytorchOCR$ python ./tools/det_infer.py --config ./config/det_DB_mobilev3.yaml --model_path ./checkpoint/ag_DB_bb_mobilenet_v3_small_he_DB_Head_bs_16_ep_1200/DB_best.pth.tar --img_path ./input  --result_save_path ./result
make: Entering directory '/home/home/programs/pytorchOCR/ptocr/postprocess/dbprocess'
make: 'cppdbprocess.so' is up to date.
make: Leaving directory '/home/home/programs/pytorchOCR/ptocr/postprocess/dbprocess'
Traceback (most recent call last):
  File "./tools/det_infer.py", line 192, in <module>
    InferImage(config)
  File "./tools/det_infer.py", line 157, in InferImage
    batch_imgs,batch_img_names = get_batch_files(path,files,batch_size=config['testload']['batch_size'])
  File "./tools/det_infer.py", line 40, in get_batch_files
    num = len(img_files)//batch_size
TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'
BADBADBADBOY commented 3 years ago

You need to add --batch_size 1 to the command.
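
The TypeError happens because config['testload']['batch_size'] is None when the flag is omitted, and floor division by None fails at det_infer.py line 40. A sketch of the failing arithmetic with a defensive default; num_full_batches is a hypothetical stand-in, not the repo's function:

```python
def num_full_batches(img_files, batch_size=None):
    # det_infer.py line 40 does len(img_files) // batch_size; when
    # --batch_size is omitted the config value is None and // raises
    # TypeError. Defaulting to 1 (an assumption) avoids the crash.
    if batch_size is None:
        batch_size = 1
    return len(img_files) // batch_size
```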

seekingdeep commented 3 years ago

The result_img and result_txt outputs are empty:

(pocr) home@home-lnx:~/programs/pytorchOCR$ python ./tools/det_infer.py --config ./config/det_DB_mobilev3.yaml --model_path ./checkpoint/ag_DB_bb_mobilenet_v3_small_he_DB_Head_bs_16_ep_1200/DB_best.pth.tar --img_path ./input  --result_save_path ./result --batch_size 1
make: Entering directory '/home/home/programs/pytorchOCR/ptocr/postprocess/dbprocess'
make: 'cppdbprocess.so' is up to date.
make: Leaving directory '/home/home/programs/pytorchOCR/ptocr/postprocess/dbprocess'
  0%|                                                                                                                                                                           | 0/1 [00:00<?, ?it/s]Traceback (most recent call last):
  File "./tools/det_infer.py", line 192, in <module>
    InferImage(config)
  File "./tools/det_infer.py", line 162, in InferImage
    InferOneImg(test_bin, batch_imgs[i],batch_img_names[i], save_path)
  File "./tools/det_infer.py", line 135, in InferOneImg
    bbox_batch, score_batch = bin.infer_img(img)
  File "./tools/det_infer.py", line 97, in infer_img
    img,scales = get_img(ori_imgs,self.config)
  File "./tools/det_infer.py", line 62, in get_img
    img,scale = resize_image_batch(ori_img,config['base']['algorithm'],config['testload']['test_size'],add_padding = config['testload']['add_padding'])
  File "./ptocr/utils/util_function.py", line 63, in resize_image_batch
    new_width = int(math.ceil(new_height / height * width / stride) * stride)
TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<0
BADBADBADBOY commented 3 years ago

Add --max_size 1536 to the command.
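
For context, the failing line in resize_image_batch scales the width by the same factor as the height and rounds it up to a multiple of a stride; with the test size unset, new_height is None and the division fails. A standalone sketch of that arithmetic, using the exact expression from the traceback; scaled_width is a hypothetical name, and stride=32 is an assumption (common for FPN-style backbones):

```python
import math

def scaled_width(height, width, new_height, stride=32):
    # mirrors util_function.py line 63: scale the width by the same
    # ratio as the height, then round up to a multiple of stride
    if new_height is None:
        raise ValueError("test size not set; pass --max_size on the command line")
    return int(math.ceil(new_height / height * width / stride) * stride)
```

For example, a 720x1280 image resized to a height of 736 gets a width of 1312 (the next multiple of 32 above 736/720 * 1280).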

seekingdeep commented 3 years ago

Thanks!!! It's working now.