LPXTT / GradNet-Tensorflow

The TensorFlow code of GradNet

how do you get the pretrained backbone of the search branch? #5

Open fzh0917 opened 4 years ago

fzh0917 commented 4 years ago

Hi Peixia Li, in the file train.py, at line 463, the code loads the pretrained weights ./ckpt/base_256/model_epoch45.ckpt for the search branch and then freezes them throughout the training procedure. As the title says, could you please tell me how you obtained the pretrained backbone of the search branch? Is it taken from the original SiamFC, or from somewhere else? I am looking forward to your reply. Thanks.

LPXTT commented 4 years ago

Hi Zhihong,

I am sorry for the late reply. The model is pretrained with the same method as SiamFC on VID.
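To illustrate the restore-and-freeze pattern discussed here (loading ./ckpt/base_256/model_epoch45.ckpt for the search branch and keeping it frozen), the sketch below shows the usual variable-partitioning step in plain Python. This is not the repo's actual code: the variable names and the `siamese` scope are hypothetical, and in TensorFlow the two resulting lists would typically go to the checkpoint saver and the optimizer, respectively.

```python
# Illustrative sketch (not the code in train.py) of how a frozen,
# restored backbone is usually separated from trainable variables:
# variables under the frozen scope are restored from the SiamFC
# checkpoint, while only the remaining variables are optimized.
# Scope and variable names here are hypothetical.

def split_variables(var_names, frozen_scope="siamese"):
    """Partition variable names into (restore, train) lists."""
    prefix = frozen_scope + "/"
    restore = [n for n in var_names if n.startswith(prefix)]
    train = [n for n in var_names if not n.startswith(prefix)]
    return restore, train

if __name__ == "__main__":
    names = [
        "siamese/conv1/weights", "siamese/conv1/biases",
        "gradnet/update/fc1/weights", "gradnet/update/fc1/biases",
    ]
    restore, train = split_variables(names)
    print(restore)  # loaded from the pretrained checkpoint, then frozen
    print(train)    # passed to the optimizer
```

In TF1-style code, `restore` would feed a `tf.train.Saver(var_list=...)` for `saver.restore(...)`, and `train` would be passed to `optimizer.minimize(loss, var_list=...)` so the restored backbone receives no gradient updates.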

Best Regards, Peixia


fzh0917 commented 4 years ago

OK, I guessed so. However, one part of the pretrained model path, base_256, makes me curious: why 256? What does the number 256 represent? Thank you for your reply.

INTOUCHABLE-VS commented 3 years ago


Hello, I directly used the SiamFC model pretrained on VID as ./ckpt/base_256/model_epoch45.ckpt. Both the l1 and l2 losses drop lower than with the original model, but the results are much worse than with the model you provided; do you know why? Also, the model pretrained with SiamFC on VID has only half as many parameters as the model_epoch45.ckpt you provided. How was your model obtained? Could you share the code or the method? Looking forward to your reply, thanks.

LPXTT commented 3 years ago


Hello, it has been a while and I am afraid I no longer remember the details of the code; I am very sorry. As far as I recall, I did not change the feature-extraction network structure, so my network definition should be the same as the original one if you compare them. As for why the saved models have different numbers of parameters, could that be caused by a different data format? I can no longer find my pretraining code. If you have pretraining code, you could retrain a SiamFC following the training hyperparameters described in our paper; perhaps the hyperparameters differ from the original ones?
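The parameter-count discrepancy above can be sanity-checked with a little arithmetic. The sketch below counts the weights of the standard SiamFC AlexNet-style backbone; the layer specifications are taken from the SiamFC paper, not from this repo, so treat the numbers as an assumption. Note that a factor-of-two difference in checkpoint size can also come from data format alone, as suggested above: float64 variables take twice the bytes of float32.

```python
# Rough parameter count for an AlexNet-style SiamFC backbone
# (layer specs assumed from the SiamFC paper, not this repo).
# Grouped convolutions divide the input channels among groups.

def conv_params(k, c_in, c_out, groups=1):
    """Weights + biases of a (possibly grouped) k x k convolution."""
    return k * k * (c_in // groups) * c_out + c_out

layers = [              # (kernel, in, out, groups)
    (11, 3, 96, 1),     # conv1
    (5, 96, 256, 2),    # conv2
    (3, 256, 384, 1),   # conv3
    (3, 384, 384, 2),   # conv4
    (3, 384, 256, 2),   # conv5
]
total = sum(conv_params(*layer) for layer in layers)
print(total)  # 2334080, i.e. roughly 2.3M parameters
```

Comparing a count like this against the variables actually stored in each checkpoint (and their dtypes) should reveal whether the factor-of-two gap comes from the architecture or from the storage format.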

INTOUCHABLE-VS commented 3 years ago


OK, thank you for your reply. I obtained the pretrained model directly from the SiamFC code you mentioned: https://github.com/www0wwwjs1/tensorflow-siamese-fc

fzh0917 commented 3 years ago

OK, thank you. I later reproduced it in PyTorch. The performance is quite good, and the generalization ability is strong as well. I wish you all the best in Australia and happy days!


Hiyao-yy commented 2 years ago

Hello, could I ask for a copy of your PyTorch reproduction code? Thanks.

LPXTT commented 2 years ago

Hello, I am very sorry, but we do not have a PyTorch version at the moment.
