hellozhuo / pidinet

Code for the ICCV 2021 paper "Pixel Difference Networks for Efficient Edge Detection" (Oral).

Generating edge maps not working on Mac #27

Open krummrey opened 2 years ago

krummrey commented 2 years ago

Hi, I'm trying to get a batch of images outlined and have tried to run the following command:

python main.py --model pidinet_converted --config carv4 --sa --dil -j 4 --gpu -1 --savedir img_out/ --datadir img_in/ --dataset pidinet --evaluate trained_models/table7_pidinet.pth --evaluate-converted

It starts and loads the model, but then aborts:

Namespace(savedir='img_out/', datadir='img_in/', only_bsds=False, ablation=False, dataset=['Custom'], model='pidinet_converted', sa=True, dil=True, config='carv4', seed=1645557333, gpu='-1', checkinfo=False, epochs=20, iter_size=24, lr=0.005, lr_type='multistep', lr_steps=None, opt='adam', wd=0.0001, workers=4, eta=0.3, lmbda=1.1, resume=False, print_freq=10, save_freq=1, evaluate='trained_models/table7_pidinet.pth', evaluate_converted=True, use_cuda=False)
{'layer0': 'cd', 'layer1': 'ad', 'layer2': 'rd', 'layer3': 'cv', 'layer4': 'cd', 'layer5': 'ad', 'layer6': 'rd', 'layer7': 'cv', 'layer8': 'cd', 'layer9': 'ad', 'layer10': 'rd', 'layer11': 'cv', 'layer12': 'cd', 'layer13': 'ad', 'layer14': 'rd', 'layer15': 'cv'}
initialization done

conv weights: lr 0.005000, wd 0.000100
bn weights: lr 0.005000, wd 0.000010
relu weights: lr 0.005000, wd 0.000000
cuda is not used, the running might be slow
=> loading checkpoint from 'trained_models/table7_pidinet.pth'
=> loaded checkpoint 'trained_models/table7_pidinet.pth' successfully
{'layer0': 'cd', 'layer1': 'ad', 'layer2': 'rd', 'layer3': 'cv', 'layer4': 'cd', 'layer5': 'ad', 'layer6': 'rd', 'layer7': 'cv', 'layer8': 'cd', 'layer9': 'ad', 'layer10': 'rd', 'layer11': 'cv', 'layer12': 'cd', 'layer13': 'ad', 'layer14': 'rd', 'layer15': 'cv'}

Traceback (most recent call last):
  File "/Users/jan/Documents/ML/pidinet/main.py", line 418, in <module>
    main(f)
  File "/Users/jan/Documents/ML/pidinet/main.py", line 201, in main
    model.load_state_dict(convert_pidinet(checkpoint['state_dict'], args.config))
  File "/opt/homebrew/Caskroom/miniforge/base/envs/pidinet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1482, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PiDiNet:
  Missing key(s) in state_dict: …

This is a MacBook with an M1 chip running macOS 12.1. Any chance of getting it up and running here?

zhuoinoulu commented 2 years ago

Hi, sorry for the late reply. Have you solved this yet? If not, could you try printing out the keys of the checkpoint?
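(A minimal sketch of how the keys could be inspected, assuming the checkpoint path from the command above; the helper name checkpoint_keys is mine, not part of the repo:)

```python
import torch

def checkpoint_keys(path):
    """Return (top-level keys, parameter names) of a saved PyTorch checkpoint."""
    # map_location='cpu' is needed on an M1 Mac, where CUDA is unavailable.
    ckpt = torch.load(path, map_location='cpu')
    return list(ckpt.keys()), list(ckpt['state_dict'].keys())

# Usage (path taken from the command in this issue):
# top, params = checkpoint_keys('trained_models/table7_pidinet.pth')
# print(top)         # top-level entries, e.g. ['state_dict', ...]
# print(params[:5])  # first few parameter names
```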

krummrey commented 2 years ago

Sadly, no, I haven't been able to get it up and running.

Namespace(savedir='img_out/', datadir='data/BSDS500', only_bsds=False, ablation=False, dataset=['BSDS'], model='pidinet_converted', sa=True, dil=True, config='carv4', seed=1653465498, gpu='-1', checkinfo=False, epochs=20, iter_size=24, lr=0.005, lr_type='multistep', lr_steps=None, opt='adam', wd=0.0001, workers=4, eta=0.3, lmbda=1.1, resume=False, print_freq=10, save_freq=1, evaluate='trained_models/table7_pidinet.pth', evaluate_converted=True, use_cuda=False)
{'layer0': 'cd', 'layer1': 'ad', 'layer2': 'rd', 'layer3': 'cv', 'layer4': 'cd', 'layer5': 'ad', 'layer6': 'rd', 'layer7': 'cv', 'layer8': 'cd', 'layer9': 'ad', 'layer10': 'rd', 'layer11': 'cv', 'layer12': 'cd', 'layer13': 'ad', 'layer14': 'rd', 'layer15': 'cv'}
initialization done
conv weights: lr 0.005000, wd 0.000100
bn weights: lr 0.005000, wd 0.000010
relu weights: lr 0.005000, wd 0.000000
cuda is not used, the running might be slow
Threshold for ground truth: 76.800000 on BSDS_VOC
Threshold for ground truth: 76.800000 on BSDS_VOC
=> loading checkpoint from 'trained_models/table7_pidinet.pth'
=> loaded checkpoint 'trained_models/table7_pidinet.pth' successfully
{'layer0': 'cd', 'layer1': 'ad', 'layer2': 'rd', 'layer3': 'cv', 'layer4': 'cd', 'layer5': 'ad', 'layer6': 'rd', 'layer7': 'cv', 'layer8': 'cd', 'layer9': 'ad', 'layer10': 'rd', 'layer11': 'cv', 'layer12': 'cd', 'layer13': 'ad', 'layer14': 'rd', 'layer15': 'cv'}
Traceback (most recent call last):
  File "/Users/jan/Documents/ML/pidinet/main.py", line 418, in <module>
    main(f)
  File "/Users/jan/Documents/ML/pidinet/main.py", line 201, in main
    model.load_state_dict(convert_pidinet(checkpoint['state_dict'], args.config))
  File "/opt/homebrew/Caskroom/miniforge/base/envs/pidinet/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1482, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PiDiNet:
  Missing
key(s) in state_dict: "init_block.weight", "block1_1.conv1.weight", "block1_1.conv2.weight", "block1_2.conv1.weight", "block1_2.conv2.weight", "block1_3.conv1.weight", "block1_3.conv2.weight", "block2_1.shortcut.weight", "block2_1.shortcut.bias", "block2_1.conv1.weight", "block2_1.conv2.weight", "block2_2.conv1.weight", "block2_2.conv2.weight", "block2_3.conv1.weight", "block2_3.conv2.weight", "block2_4.conv1.weight", "block2_4.conv2.weight", "block3_1.shortcut.weight", "block3_1.shortcut.bias", "block3_1.conv1.weight", "block3_1.conv2.weight", "block3_2.conv1.weight", "block3_2.conv2.weight", "block3_3.conv1.weight", "block3_3.conv2.weight", "block3_4.conv1.weight", "block3_4.conv2.weight", "block4_1.shortcut.weight", "block4_1.shortcut.bias", "block4_1.conv1.weight", "block4_1.conv2.weight", "block4_2.conv1.weight", "block4_2.conv2.weight", "block4_3.conv1.weight", "block4_3.conv2.weight", "block4_4.conv1.weight", "block4_4.conv2.weight", "conv_reduces.0.conv.weight", "conv_reduces.0.conv.bias", "conv_reduces.1.conv.weight", "conv_reduces.1.conv.bias", "conv_reduces.2.conv.weight", "conv_reduces.2.conv.bias", "conv_reduces.3.conv.weight", "conv_reduces.3.conv.bias", "attentions.0.conv1.weight", "attentions.0.conv1.bias", "attentions.0.conv2.weight", "attentions.1.conv1.weight", "attentions.1.conv1.bias", "attentions.1.conv2.weight", "attentions.2.conv1.weight", "attentions.2.conv1.bias", "attentions.2.conv2.weight", "attentions.3.conv1.weight", "attentions.3.conv1.bias", "attentions.3.conv2.weight", "dilations.0.conv1.weight", "dilations.0.conv1.bias", "dilations.0.conv2_1.weight", "dilations.0.conv2_2.weight", "dilations.0.conv2_3.weight", "dilations.0.conv2_4.weight", "dilations.1.conv1.weight", "dilations.1.conv1.bias", "dilations.1.conv2_1.weight", "dilations.1.conv2_2.weight", "dilations.1.conv2_3.weight", "dilations.1.conv2_4.weight", "dilations.2.conv1.weight", "dilations.2.conv1.bias", "dilations.2.conv2_1.weight", "dilations.2.conv2_2.weight", 
"dilations.2.conv2_3.weight", "dilations.2.conv2_4.weight", "dilations.3.conv1.weight", "dilations.3.conv1.bias", "dilations.3.conv2_1.weight", "dilations.3.conv2_2.weight", "dilations.3.conv2_3.weight", "dilations.3.conv2_4.weight", "classifier.weight", "classifier.bias". Unexpected key(s) in state_dict: "module.init_block.weight", "module.block1_1.conv1.weight", "module.block1_1.conv2.weight", "module.block1_2.conv1.weight", "module.block1_2.conv2.weight", "module.block1_3.conv1.weight", "module.block1_3.conv2.weight", "module.block2_1.shortcut.weight", "module.block2_1.shortcut.bias", "module.block2_1.conv1.weight", "module.block2_1.conv2.weight", "module.block2_2.conv1.weight", "module.block2_2.conv2.weight", "module.block2_3.conv1.weight", "module.block2_3.conv2.weight", "module.block2_4.conv1.weight", "module.block2_4.conv2.weight", "module.block3_1.shortcut.weight", "module.block3_1.shortcut.bias", "module.block3_1.conv1.weight", "module.block3_1.conv2.weight", "module.block3_2.conv1.weight", "module.block3_2.conv2.weight", "module.block3_3.conv1.weight", "module.block3_3.conv2.weight", "module.block3_4.conv1.weight", "module.block3_4.conv2.weight", "module.block4_1.shortcut.weight", "module.block4_1.shortcut.bias", "module.block4_1.conv1.weight", "module.block4_1.conv2.weight", "module.block4_2.conv1.weight", "module.block4_2.conv2.weight", "module.block4_3.conv1.weight", "module.block4_3.conv2.weight", "module.block4_4.conv1.weight", "module.block4_4.conv2.weight", "module.conv_reduces.0.conv.weight", "module.conv_reduces.0.conv.bias", "module.conv_reduces.1.conv.weight", "module.conv_reduces.1.conv.bias", "module.conv_reduces.2.conv.weight", "module.conv_reduces.2.conv.bias", "module.conv_reduces.3.conv.weight", "module.conv_reduces.3.conv.bias", "module.attentions.0.conv1.weight", "module.attentions.0.conv1.bias", "module.attentions.0.conv2.weight", "module.attentions.1.conv1.weight", "module.attentions.1.conv1.bias", "module.attentions.1.conv2.weight", 
"module.attentions.2.conv1.weight", "module.attentions.2.conv1.bias", "module.attentions.2.conv2.weight", "module.attentions.3.conv1.weight", "module.attentions.3.conv1.bias", "module.attentions.3.conv2.weight", "module.dilations.0.conv1.weight", "module.dilations.0.conv1.bias", "module.dilations.0.conv2_1.weight", "module.dilations.0.conv2_2.weight", "module.dilations.0.conv2_3.weight", "module.dilations.0.conv2_4.weight", "module.dilations.1.conv1.weight", "module.dilations.1.conv1.bias", "module.dilations.1.conv2_1.weight", "module.dilations.1.conv2_2.weight", "module.dilations.1.conv2_3.weight", "module.dilations.1.conv2_4.weight", "module.dilations.2.conv1.weight", "module.dilations.2.conv1.bias", "module.dilations.2.conv2_1.weight", "module.dilations.2.conv2_2.weight", "module.dilations.2.conv2_3.weight", "module.dilations.2.conv2_4.weight", "module.dilations.3.conv1.weight", "module.dilations.3.conv1.bias", "module.dilations.3.conv2_1.weight", "module.dilations.3.conv2_2.weight", "module.dilations.3.conv2_3.weight", "module.dilations.3.conv2_4.weight", "module.classifier.weight", "module.classifier.bias".
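For what it's worth, the error above is the classic symptom of a checkpoint saved from a model wrapped in torch.nn.DataParallel: every parameter name gains a "module." prefix, while the plain model (as built when use_cuda=False) expects the unprefixed names. A minimal sketch of a possible workaround, stripping the prefix before load_state_dict (the helper name strip_module_prefix is mine, not part of the repo):

```python
def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel adds to every key."""
    prefix = 'module.'
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

# Hypothetical usage inside main.py, before the failing load_state_dict call:
# checkpoint = torch.load(args.evaluate, map_location='cpu')
# state_dict = strip_module_prefix(checkpoint['state_dict'])
# model.load_state_dict(convert_pidinet(state_dict, args.config))
```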