HonglinChu / SiamTrackers

(2020-2022) The PyTorch versions of SiamFC, SiamRPN, DaSiamRPN, UpdateNet, SiamDW, SiamRPN++, SiamMask, SiamFC++, SiamCAR, SiamBAN, Ocean, LightTrack, TrTr, NanoTrack; visual object tracking based on deep learning.
Apache License 2.0

Error when running the demo #139

Closed. Guojiajing1 closed this issue 4 months ago.

Guojiajing1 commented 11 months ago

I'd like to ask for some help. I started by running demo.py directly, but it reported this error:

Traceback (most recent call last):
  File "E:\Tools_project\python_project\DL\code\SiamTrackers-master\NanoTrack\bin\demo.py", line 144, in main()
  File "E:\Tools_project\python_project\DL\code\SiamTrackers-master\NanoTrack\bin\demo.py", line 83, in main model = load_pretrain(model, args.snapshot).cuda().eval()
  File "E:\Tools_project\python_project\DL\code\SiamTrackers-master\NanoTrack\nanotrack\utils\model_load.py", line 70, in load_pretrain check_keys(model, pretrained_dict)
  File "E:\Tools_project\python_project\DL\code\SiamTrackers-master\NanoTrack\nanotrack\utils\model_load.py", line 32, in check_keys assert len(used_pretrained_keys) > 0, \
AssertionError: load NONE from pretrained checkpoint

But doesn't the weight file already exist?
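For reference, a simplified, hypothetical sketch of what a pysot-style load_pretrain/check_keys pair typically does (this is not the repo's exact code; the helper name and the strict=False choice are assumptions). The assertion in the traceback fires when the checkpoint and the model share zero parameter names, which usually points to a checkpoint/config mismatch or a leftover 'module.' prefix:

```python
import torch

def load_pretrain_sketch(model, snapshot_path):
    """Simplified, hypothetical sketch of pysot-style pretrained loading (not the repo's exact code)."""
    ckpt = torch.load(snapshot_path, map_location=lambda storage, loc: storage)
    state = ckpt['state_dict'] if isinstance(ckpt, dict) and 'state_dict' in ckpt else ckpt
    # Drop a leading 'module.' that nn.DataParallel adds during training, if present.
    state = {k[len('module.'):] if k.startswith('module.') else k: v for k, v in state.items()}
    used = set(state.keys()) & set(model.state_dict().keys())
    # This is the kind of check that raises "load NONE from pretrained checkpoint":
    assert len(used) > 0, 'load NONE from pretrained checkpoint'
    model.load_state_dict(state, strict=False)
    return model
```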

HonglinChu commented 11 months ago

Try setting the snapshot here to an absolute path. In a terminal, cd into the xxx/xxx/NanoTrack directory and then run the script from the command line. (image)
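As a hedged illustration of the absolute-path suggestion (the relative layout below is an assumption about where demo.py and the checkpoint live), the snapshot path can also be resolved relative to the script itself, so it no longer depends on the current working directory:

```python
import os

# Hypothetical: build an absolute path to the checkpoint relative to demo.py,
# so running the script from any working directory still finds the weights.
snapshot = os.path.normpath(os.path.join(
    os.path.dirname(os.path.abspath(__file__)),
    '..', 'models', 'pretrained', 'nanotrackv2.pth'))
assert os.path.isfile(snapshot), f'checkpoint not found: {snapshot}'
```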

Guojiajing1 commented 11 months ago

I tried your method, but it doesn't work. Without changing anything else, it shows:

RuntimeError: Error(s) in loading state_dict for ModelBuilder:
  size mismatch for ban_head.corr_pw_reg.conv_kernel.0.weight: copying a param with shape torch.Size([48, 48, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 1, 1]).
  size mismatch for ban_head.corr_pw_reg.conv_kernel.0.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
  size mismatch for ban_head.corr_pw_reg.conv_kernel.1.weight: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
  size mismatch for ban_head.corr_pw_reg.conv_kernel.1.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
  size mismatch for ban_head.corr_pw_reg.conv_kernel.1.running_mean: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
  size mismatch for ban_head.corr_pw_reg.conv_kernel.1.running_var: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).

But after I added model = nn.DataParallel(model).cuda() and changed the path to an absolute one, the error becomes:

Traceback (most recent call last):
  File "demo.py", line 156, in main()
  File "demo.py", line 95, in main model = load_pretrain(model, '/home/code/NanoTrack/models/pretrained/nanotrackv2.pth').cuda().eval() #erro
  File "/home/code/NanoTrack/bin/../nanotrack/utils/model_load.py", line 70, in load_pretrain check_keys(model, pretrained_dict)
  File "/home/code/NanoTrack/bin/../nanotrack/utils/model_load.py", line 32, in check_keys assert len(used_pretrained_keys) > 0, \
AssertionError: load NONE from pretrained checkpoint

In the meantime I inspected the model (ModelBuilder) and the nanotrackv2.pth weights, and the two do not seem to match: Warning: Keys in the model and checkpoint do not match.
Missing keys: {'ban_head.bbox_tower.24.running_mean', 'ban_head.bbox_tower.11.num_batches_tracked', 'ban_head.corr_pw_cls.conv.1.num_batches_tracked', 'ban_head.corr_dw_reg.conv_search.1.running_mean', 'ban_head.bbox_tower.29.bias', 'ban_head.bbox_tower.26.running_mean', 'ban_head.corr_dw_reg.conv_kernel.0.weight', 'ban_head.bbox_tower.1.running_var', 'ban_head.corr_dw_cls.conv_kernel.0.bias', 'ban_head.cls_tower.24.bias', 'ban_head.corr_pw_reg.conv.0.weight', 'ban_head.cls_tower.19.weight', 'ban_head.corr_pw_reg.conv.1.running_var', 'ban_head.bbox_tower.21.running_var', 'ban_head.corr_dw_reg.conv_kernel.1.bias', 'ban_head.cls_tower.4.weight', 'ban_head.corr_dw_reg.conv_search.1.num_batches_tracked', 'ban_head.cls_tower.4.running_var', 'ban_head.cls_tower.1.running_var', 'ban_head.bbox_tower.8.weight', 'ban_head.bbox_tower.15.weight', 'ban_head.bbox_tower.4.running_mean', 'ban_head.corr_dw_cls.conv_search.0.weight', 'ban_head.corr_pw_cls.conv.3.weight', 'ban_head.corr_pw_cls.conv.4.weight', 'ban_head.corr_pw_reg.conv.4.num_batches_tracked', 'ban_head.cls_tower.16.bias', 'ban_head.cls_tower.19.running_var', 'ban_head.corr_pw_cls.conv.1.weight', 'ban_head.cls_tower.11.num_batches_tracked', 'ban_head.cls_tower.9.weight', 'ban_head.cls_tower.24.weight', 'ban_head.cls_tower.14.running_var', 'ban_head.cls_tower.1.running_mean', 'ban_head.cls_tower.13.weight', 'ban_head.down_reg.0.weight', 'ban_head.cls_tower.24.num_batches_tracked', 'ban_head.corr_dw_reg.conv_kernel.1.running_var', 'ban_head.cls_tower.10.weight', 'ban_head.bbox_tower.14.num_batches_tracked', 'ban_head.corr_pw_reg.conv.3.weight', 'ban_head.bbox_tower.1.running_mean', 'ban_head.cls_tower.14.num_batches_tracked', 'ban_head.bbox_tower.19.running_mean', 'ban_head.cls_tower.6.weight', 'ban_head.bbox_tower.28.weight', 'ban_head.cls_tower.18.weight', 'ban_head.cls_tower.4.running_mean', 'ban_head.cls_tower.6.bias', 'ban_head.corr_pw_cls.conv.4.num_batches_tracked', 'ban_head.bbox_tower.16.num_batches_tracked', 'ban_head.bbox_tower.14.bias', 'ban_head.cls_tower.25.weight', 'ban_head.corr_dw_reg.conv_search.0.bias', 'ban_head.corr_pw_reg.conv.4.running_mean', 'ban_head.corr_pw_cls.conv.4.running_var', 'ban_head.bbox_tower.9.running_var', 'ban_head.bbox_tower.6.num_batches_tracked', 'ban_head.corr_dw_reg.conv_search.1.bias', 'ban_head.cls_tower.14.bias', 'ban_head.bbox_tower.26.running_var', 'ban_head.cls_tower.28.weight', 'ban_head.bbox_tower.24.running_var', 'ban_head.cls_tower.29.running_var', 'ban_head.cls_tower.21.running_var', 'ban_head.bbox_tower.24.num_batches_tracked', 'ban_head.corr_dw_cls.conv_search.1.weight', 'ban_head.bbox_tower.19.weight', 'ban_head.bbox_tower.24.bias', 'ban_head.cls_tower.29.bias', 'ban_head.cls_tower.4.bias', 'ban_head.corr_pw_cls.conv.4.running_mean', 'ban_head.cls_tower.3.weight', 'ban_head.bbox_tower.6.running_var', 'ban_head.cls_tower.19.bias', 'ban_head.bbox_tower.13.weight', 'ban_head.corr_pw_reg.conv.4.running_var', 'ban_head.corr_pw_reg.conv.3.bias', 'ban_head.bbox_tower.4.bias', 'ban_head.cls_tower.16.num_batches_tracked', 'ban_head.cls_tower.14.running_mean', 'ban_head.cls_tower.29.running_mean', 'ban_head.bbox_tower.21.num_batches_tracked', 'ban_head.cls_tower.26.num_batches_tracked', 'ban_head.corr_pw_reg.conv.1.running_mean', 'ban_head.cls_tower.21.running_mean', 'ban_head.bbox_tower.29.num_batches_tracked', 'ban_head.cls_tower.6.num_batches_tracked', 'ban_head.corr_dw_cls.conv_kernel.1.weight', 'ban_head.cls_tower.6.running_mean', 'ban_head.cls_tower.11.running_mean', 
'ban_head.bbox_tower.26.bias', 'ban_head.cls_tower.21.bias', 'ban_head.corr_dw_cls.conv_search.0.bias', 'ban_head.bbox_tower.0.weight', 'ban_head.corr_dw_reg.conv_kernel.1.num_batches_tracked', 'ban_head.cls_tower.20.weight', 'ban_head.corr_dw_reg.conv_search.1.running_var', 'ban_head.bbox_tower.6.bias', 'ban_head.cls_tower.26.running_mean', 'ban_head.bbox_tower.6.running_mean', 'ban_head.bbox_tower.1.weight', 'ban_head.bbox_tower.3.weight', 'ban_head.corr_pw_reg.conv.1.weight', 'ban_head.bbox_tower.16.weight', 'ban_head.bbox_tower.21.running_mean', 'ban_head.bbox_tower.16.running_mean', 'ban_head.cls_tower.29.num_batches_tracked', 'ban_head.cls_tower.19.running_mean', 'ban_head.corr_dw_reg.conv_search.1.weight', 'ban_head.corr_pw_reg.conv.1.num_batches_tracked', 'ban_head.cls_tower.15.weight', 'ban_head.cls_tower.26.weight', 'ban_head.cls_tower.9.running_mean', 'ban_head.cls_tower.16.running_var', 'ban_head.corr_pw_reg.conv.1.bias', 'ban_head.down_cls.0.weight', 'ban_head.cls_tower.9.num_batches_tracked', 'ban_head.cls_tower.8.weight', 'ban_head.bbox_tower.6.weight', 'ban_head.down_cls.0.bias', 'ban_head.corr_pw_cls.conv.1.running_mean', 'ban_head.bbox_tower.11.running_mean', 'ban_head.bbox_tower.19.num_batches_tracked', 'ban_head.cls_tower.24.running_mean', 'ban_head.bbox_tower.11.weight', 'ban_head.cls_tower.29.weight', 'ban_head.bbox_tower.25.weight', 'ban_head.bbox_tower.21.weight', 'ban_head.corr_dw_cls.conv_kernel.1.running_var', 'ban_head.corr_dw_cls.conv_search.1.running_var', 'ban_head.cls_tower.11.bias', 'ban_head.bbox_tower.29.running_var', 'ban_head.cls_tower.4.num_batches_tracked', 'ban_head.corr_dw_cls.conv_kernel.0.weight', 'ban_head.corr_dw_reg.conv_search.0.weight', 'ban_head.bbox_tower.19.bias', 'ban_head.bbox_tower.26.num_batches_tracked', 'ban_head.bbox_tower.21.bias', 'ban_head.bbox_tower.9.running_mean', 'ban_head.bbox_tower.9.bias', 'ban_head.corr_pw_cls.conv.3.bias', 'ban_head.corr_pw_reg.conv.4.bias', 'ban_head.corr_dw_cls.conv_search.1.running_mean', 'ban_head.corr_dw_reg.conv_kernel.0.bias', 'ban_head.bbox_tower.16.bias', 'ban_head.bbox_tower.10.weight', 'ban_head.corr_dw_cls.conv_kernel.1.bias', 'ban_head.corr_dw_reg.conv_kernel.1.weight', 'ban_head.corr_dw_cls.conv_search.1.num_batches_tracked', 'ban_head.bbox_tower.4.weight', 'ban_head.corr_pw_cls.conv.1.running_var', 'ban_head.bbox_tower.14.weight', 'ban_head.cls_tower.6.running_var', 'ban_head.cls_tower.16.running_mean', 'ban_head.bbox_tower.4.num_batches_tracked', 'ban_head.cls_tower.1.num_batches_tracked', 'ban_head.corr_dw_reg.conv_kernel.1.running_mean', 'ban_head.cls_tower.0.weight', 'ban_head.bbox_tower.1.num_batches_tracked', 'ban_head.cls_tower.9.running_var', 'ban_head.bbox_tower.14.running_var', 'ban_head.bbox_tower.16.running_var', 'ban_head.cls_tower.21.weight', 'ban_head.cls_tower.26.running_var', 'ban_head.bbox_tower.19.running_var', 'ban_head.bbox_tower.26.weight', 'ban_head.bbox_tower.29.weight', 'ban_head.cls_tower.1.bias', 'ban_head.bbox_tower.18.weight', 'ban_head.bbox_tower.5.weight', 'ban_head.cls_logits.0.bias', 'ban_head.bbox_tower.4.running_var', 'ban_head.corr_dw_cls.conv_search.1.bias', 'ban_head.cls_tower.23.weight', 'ban_head.cls_tower.24.running_var', 'ban_head.bbox_tower.14.running_mean', 'ban_head.cls_tower.11.weight', 'ban_head.cls_tower.21.num_batches_tracked', 'ban_head.bbox_tower.9.weight', 'ban_head.cls_tower.14.weight', 'ban_head.corr_pw_cls.conv.4.bias', 'ban_head.bbox_tower.1.bias', 'ban_head.bbox_tower.11.bias', 'ban_head.corr_dw_cls.conv_kernel.1.running_mean', 
'ban_head.bbox_tower.23.weight', 'ban_head.corr_dw_cls.conv_kernel.1.num_batches_tracked', 'ban_head.cls_tower.11.running_var', 'ban_head.corr_pw_cls.conv.0.weight', 'ban_head.bbox_tower.24.weight', 'ban_head.cls_tower.19.num_batches_tracked', 'ban_head.cls_logits.0.weight', 'ban_head.cls_tower.1.weight', 'ban_head.cls_tower.26.bias', 'ban_head.cls_tower.9.bias', 'ban_head.bbox_tower.29.running_mean', 'ban_head.corr_pw_reg.conv.4.weight', 'ban_head.cls_tower.5.weight', 'ban_head.bbox_tower.9.num_batches_tracked', 'ban_head.cls_tower.16.weight', 'ban_head.bbox_tower.11.running_var', 'ban_head.bbox_tower.20.weight', 'ban_head.corr_pw_cls.conv.1.bias', 'ban_head.down_reg.0.bias'} Unexpected keys: {'ban_head.cls_pw_tower.22.running_mean', 'ban_head.bbox_pw_tower.0.weight', 'ban_head.bbox_pw_tower.2.running_mean', 'ban_head.cls_pw_tower.10.running_var', 'ban_head.cls_pw_tower.1.weight', 'ban_head.bbox_pw_tower.18.running_mean', 'ban_head.cls_pw_tower.22.running_var', 'ban_head.cls_pw_tower.18.bias', 'ban_head.cls_pw_tower.10.num_batches_tracked', 'ban_head.bbox_pw_tower.18.weight', 'ban_head.cls_pw_tower.8.weight', 'ban_head.cls_pw_tower.6.bias', 'ban_head.cls_pw_tower.10.running_mean', 'ban_head.bbox_pw_tower.4.weight', 'ban_head.bbox_pw_tower.16.weight', 'ban_head.bbox_pw_tower.18.running_var', 'ban_head.bbox_pw_tower.5.weight', 'ban_head.cls_pw_tower.14.running_var', 'ban_head.cls_pw_tower.12.weight', 'ban_head.cls_pw_tower.5.weight', 'ban_head.bbox_pw_tower.12.weight', 'ban_head.bbox_pw_tower.20.weight', 'ban_head.bbox_pw_tower.14.running_var', 'ban_head.cls_pw_tower.16.weight', 'ban_head.bbox_pw_tower.6.running_var', 'ban_head.bbox_pw_tower.2.weight', 'ban_head.bbox_pw_tower.6.num_batches_tracked', 'ban_head.bbox_pw_tower.10.num_batches_tracked', 'ban_head.bbox_pw_tower.21.weight', 'ban_head.cls_pred.0.weight', 'ban_head.bbox_pw_tower.22.running_var', 'ban_head.bbox_pw_tower.6.running_mean', 'ban_head.cls_pw_tower.2.weight', 'ban_head.cls_pw_tower.10.bias', 'ban_head.cls_pw_tower.18.running_mean', 'ban_head.cls_pw_tower.0.weight', 'ban_head.cls_pw_tower.6.weight', 'ban_head.bbox_pw_tower.22.running_mean', 'ban_head.bbox_pw_tower.18.bias', 'ban_head.cls_pw_tower.18.weight', 'ban_head.cls_pw_tower.22.num_batches_tracked', 'ban_head.cls_pw_tower.2.running_var', 'ban_head.bbox_pw_tower.9.weight', 'ban_head.cls_pw_tower.14.num_batches_tracked', 'ban_head.bbox_pw_tower.22.weight', 'ban_head.cls_pw_tower.20.weight', 'ban_head.cls_pw_tower.22.bias', 'ban_head.bbox_pw_tower.14.running_mean', 'ban_head.bbox_pw_tower.10.running_mean', 'ban_head.cls_pw_tower.2.num_batches_tracked', 'ban_head.bbox_pw_tower.13.weight', 'ban_head.cls_pw_tower.18.num_batches_tracked', 'ban_head.bbox_pw_tower.18.num_batches_tracked', 'ban_head.bbox_pw_tower.14.bias', 'ban_head.bbox_pw_tower.2.running_var', 'ban_head.bbox_pw_tower.22.bias', 'ban_head.cls_pred.0.bias', 'ban_head.bbox_pw_tower.22.num_batches_tracked', 'ban_head.cls_pw_tower.21.weight', 'ban_head.cls_pw_tower.17.weight', 'ban_head.bbox_pw_tower.6.weight', 'ban_head.cls_pw_tower.6.num_batches_tracked', 'ban_head.bbox_pw_tower.2.bias', 'ban_head.cls_pw_tower.22.weight', 'ban_head.bbox_pw_tower.8.weight', 'ban_head.bbox_pw_tower.10.running_var', 'ban_head.cls_pw_tower.14.bias', 'ban_head.bbox_pw_tower.10.weight', 'ban_head.bbox_pw_tower.6.bias', 'ban_head.bbox_pw_tower.14.weight', 'ban_head.cls_pw_tower.6.running_mean', 'ban_head.cls_pw_tower.4.weight', 'ban_head.cls_pw_tower.9.weight', 'ban_head.cls_pw_tower.13.weight', 'ban_head.cls_pw_tower.6.running_var', 
'ban_head.cls_pw_tower.2.running_mean', 'ban_head.cls_pw_tower.18.running_var', 'ban_head.cls_pw_tower.14.running_mean', 'ban_head.bbox_pw_tower.14.num_batches_tracked', 'ban_head.cls_pw_tower.2.bias', 'ban_head.bbox_pw_tower.2.num_batches_tracked', 'ban_head.bbox_pw_tower.10.bias', 'ban_head.bbox_pw_tower.17.weight', 'ban_head.cls_pw_tower.14.weight', 'ban_head.bbox_pw_tower.1.weight', 'ban_head.cls_pw_tower.10.weight'} Is this pretrained checkpoint simply unusable, and what should I change now?
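One quick, hedged way to see which head layout the checkpoint was trained with (the file path below is taken from this thread; the structure handling is an assumption) is to print a few of its parameter names and shapes. Names like cls_pw_tower/bbox_pw_tower versus cls_tower/bbox_tower immediately show which model config the weights expect:

```python
import torch

# Hypothetical inspection of the checkpoint's own keys, independent of the model.
ckpt = torch.load('/home/code/NanoTrack/models/pretrained/nanotrackv2.pth', map_location='cpu')
state = ckpt['state_dict'] if isinstance(ckpt, dict) and 'state_dict' in ckpt else ckpt
for name in sorted(state.keys())[:20]:
    shape = tuple(state[name].shape) if hasattr(state[name], 'shape') else state[name]
    print(name, shape)
```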

Guojiajing1 commented 10 months ago

Hello, I've recently been working with your project. First, I solved the earlier problem by modifying the config file. Now I want to train the model, but I ran into a problem during image cropping: par_crop.py runs successfully but does not generate any image pairs, only empty folders. What I did: (1) Since the dataset is very large, I initially took only the first 108 sequences from the val split as my training set, with 36 of them used for validation; the sequences under train are named GOT10K/train/GOT-10k_Train_000001 through GOT-10k_Train_000108, and val is named similarly. (2) In par_crop.py, which sits in the same directory as GOT10K, I set got10k_base_path to the path of the original GOT10K dataset and then ran it. Since main in par_crop.py expects two arguments from the command line, I tried running python par_crop.py 511 2 (I tried several combinations) and also calling main(511, 1) directly in the if __main__ block.

This is the result of my run; I don't think there was any problem with how I ran it.

The result only generated the form shown in the image below.

Afterwards I thought the dataset might just be too small, so I tried cropping the first 1000 sequences of the original train split. Before that I also ran parser_got10k.py to generate the json file, but when I then ran the .py file the problem appeared again. I don't know where the error is; could you please advise? Thank you!
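A hedged way to confirm what parser_got10k.py actually produced (the json file name below is an assumption; use whatever the script really writes) is to load it and count the sequences it indexed:

```python
import json

# Hypothetical check of the annotation file written by parser_got10k.py.
# Replace 'train.json' with the actual output file name.
with open('train.json') as f:
    anno = json.load(f)
print('sequences indexed in the json:', len(anno))
first = next(iter(anno))
print('example entry:', first, type(anno[first]))
```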


HonglinChu commented 10 months ago

(image) Also, there are some paths in here that you need to modify to match your own setup. (image)

Follow the steps in the file, and change 271 to 511. I also suggest single-stepping through it with a debugger to see which line goes wrong; empty output folders are most likely caused by a wrong path.
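A hedged sketch of that kind of check (got10k_base_path comes from this thread; the crop output directory name is an assumption): confirm the source sequences are actually visible from the paths set in par_crop.py and count how many crops were written.

```python
import glob
import os

# Hypothetical sanity check around par_crop.py; adjust both paths to your setup.
got10k_base_path = '/path/to/GOT10K'   # raw GOT-10k data that par_crop.py reads
crop_out_dir = './crop511'             # wherever par_crop.py writes its crops (assumed name)

train_dir = os.path.join(got10k_base_path, 'train')
seqs = sorted(d for d in os.listdir(train_dir)
              if os.path.isdir(os.path.join(train_dir, d)))
print('source sequences found:', len(seqs))
if seqs:
    frames = glob.glob(os.path.join(train_dir, seqs[0], '*.jpg'))
    print('frames in first sequence:', len(frames))

crops = glob.glob(os.path.join(crop_out_dir, '**', '*.jpg'), recursive=True)
print('crops written so far:', len(crops))
# If the source counts are zero, the paths in par_crop.py are wrong; if the source looks fine
# but no crops appear, step through the crop loop, e.g. with: python -m pdb par_crop.py 511 2
```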