researchmm / SiamDW

[CVPR'19 Oral] Deeper and Wider Siamese Networks for Real-Time Visual Tracking
http://openaccess.thecvf.com/content_CVPR_2019/html/Zhang_Deeper_and_Wider_Siamese_Networks_for_Real-Time_Visual_Tracking_CVPR_2019_paper.html
MIT License

The results of the training model did not meet expectations? #20

Closed haizhongli closed 5 years ago

haizhongli commented 5 years ago

Thanks for your code first, it helps a lot. Following the configuration process you described, I successfully ran the training code of the SiamFCRes22 network with CIResNet22_PRETRAIN.model as the pretrained model. In the SiamFC.yaml file, the only thing I changed was the data path. Because of limited resources, I have only one GPU. The training command I ran is: python ./siamese_tracking/train_siamfc.py --cfg experiments/train/SiamFC.yaml --gpus 0,1,2,3 --workers 32 2>&1 | tee logs/siamfc_train.log. But the best result over 50 training epochs is only about 0.62. I don't know what the problem is. Is there any inconsistency or oversight on my part that could cause this?

Thank you.

JudasDie commented 5 years ago


  1. The GPU count is a random factor; I believe there is no large gap between different counts (maybe 0.5 points).
  2. 0.62 is not good, but also not so bad. I generally get 0.635 - 0.645 with the default parameters (which were set arbitrarily).
  3. I noticed you did not tune the hyper-parameters on your test data. Please do so according to readme.md.
  4. 600 - 1000 parameter groups can yield a favorable result. However, tuning with one card may take a long time.
  5. You can write a script to extract information from the tuning log to find the best hyper-parameters. I'll also upload such a script in the coming days.
  6. For any other questions or discussion, you can email zhangzhipeng2017@ia.ac.cn.
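A minimal sketch of the log-scraping idea from item 5. The log format is an assumption here (the repo's actual tuning log may differ): each trial is presumed to be logged on one line containing its parameter values plus a score such as `auc: 0.6420`.

```python
import re


def best_params(log_path):
    """Scan a tuning log and return (best_score, best_line).

    Assumes (hypothetically) one trial per line, each containing
    'auc: <float>' alongside the parameter values, e.g.
    'scale_step: 1.15 scale_lr: 0.30 w_influence: 0.35 auc: 0.6420'.
    Adapt the regex to the real log format produced by the toolkit.
    """
    best_score, best_line = -1.0, None
    with open(log_path) as f:
        for line in f:
            m = re.search(r"auc:\s*([0-9.]+)", line)
            if m and float(m.group(1)) > best_score:
                best_score = float(m.group(1))
                best_line = line.strip()
    return best_score, best_line
```

Running it over the tee'd log then prints the single best parameter group instead of requiring a manual search through hundreds of trials.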
haizhongli commented 5 years ago

Thank you for your answer. I'll try the param_Tune operation.

JudasDie commented 5 years ago

You're welcome. I'll upload a better pipeline and tuning toolkit soon. The tuning code here is for general videos; you can modify it to tune on VOT or OTB (BTW, VOT takes less time). Or you can just wait about three days. :)

Any other questions? If not, this issue will be closed later.

haizhongli commented 5 years ago

That's great. I'll wait for your latest version. I don't have any problems for the time being.

JudasDie commented 5 years ago

Okay, the issue will be closed. For any further discussion, please email me.

hwpengms commented 5 years ago

It's great to know that you reproduced our results. @haizhongli Thanks @JudasDie for your patient work and helps!

zhu2014yi commented 5 years ago

Are the results you trained on OTB2015?

haizhongli commented 5 years ago

The result I mentioned was on OTB2013. With VID, my training results are really not very good, but with GOT-10k training the result is almost the same as the author's reported results, and quite stable.

zhu2014yi commented 5 years ago

I also used GOT-10k to train SiamFCRes22, but I only got 0.646 on OTB2013 (best at epoch 39). Did you change anything or tune the hyper-parameters (using tune_tpe.py)? Also, GOT-10k has 4 videos with erroneous annotations; did you correct them? I also trained SiamFCRes22W on GOT-10k but got 0.659 on OTB2013 (best at epoch 43), which is lower than the author's SiamFCRes22W. Have you tried it?
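Excluding the mislabeled GOT-10k videos can be sketched as a simple filter over the sequence list before building the training crops. The thread does not name the four sequences, so the names in the example below are placeholders:

```python
def filter_sequences(sequences, bad_names):
    """Drop training sequences whose names appear in bad_names.

    bad_names should hold the identifiers of the mislabeled GOT-10k
    videos (placeholders in the usage below; the four real names are
    not given in this thread).
    """
    bad = set(bad_names)
    return [seq for seq in sequences if seq not in bad]


# Usage with hypothetical sequence identifiers:
all_seqs = ["GOT-10k_Train_000001", "GOT-10k_Train_000002",
            "GOT-10k_Train_000003"]
clean_seqs = filter_sequences(all_seqs, ["GOT-10k_Train_000002"])
```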

haizhongli commented 5 years ago

I haven't changed anything, and I'm using the previous version, which trains all the parameters in the feature backbone simultaneously. My best result is 0.658 on OTB2013, corresponding to checkpoint_39. The difference in our results may come from inconsistent GPU configurations or numpy/cv2 versions. SiamFC is very sensitive to parameters, but the results are good now. You can just fine-tune it again.

zhu2014yi commented 5 years ago

Thanks!