HelloRicky123 opened 5 years ago
Could you open-source your code? I tried to implement SiamRPN without ImageNet pretraining or YouTube-BB and only got 0.22 AUC on OTB2015.
So, can you provide your accuracy on OTB or VOT? I reimplemented the paper with the same network as yours, without ImageNet pretraining or the YouTube-BB dataset, and got at most 0.52 AUC on OTB2013.
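For reference, the AUC numbers quoted in this thread come from the OTB success plot: for each overlap threshold in [0, 1], count the fraction of frames whose predicted box overlaps the ground truth by more than that threshold, then average. A minimal sketch of that metric (function names are my own; boxes assumed in `(x, y, w, h)` format, 21 thresholds as in the OTB protocol):

```python
import numpy as np

def iou(box_a, box_b):
    # Intersection-over-union for two (x, y, w, h) boxes.
    xa2, ya2 = box_a[0] + box_a[2], box_a[1] + box_a[3]
    xb2, yb2 = box_b[0] + box_b[2], box_b[1] + box_b[3]
    iw = max(0.0, min(xa2, xb2) - max(box_a[0], box_b[0]))
    ih = max(0.0, min(ya2, yb2) - max(box_a[1], box_b[1]))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred_boxes, gt_boxes):
    # Success rate at each of 21 overlap thresholds in [0, 1];
    # the reported AUC is the mean of these success rates.
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    thresholds = np.linspace(0.0, 1.0, 21)
    success = [(overlaps > t).mean() for t in thresholds]
    return float(np.mean(success))
```

So a 0.52 AUC roughly means the tracker keeps above-threshold overlap on about half the (threshold, frame) combinations.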
How about the VOT EAO?
How do I test the pretrained model on OTB?
@HelloRicky123 Hi, I can get 0.19 on VOT2018 with a modification based on your code, but I don't know whether it can be made better.
@leeyeehoo You mean my code (https://github.com/HelloRicky123/Siamese-RPN) without the YT-BB dataset? The paper reports 0.243 on VOT2017, and SiamFC gets 0.182 on VOT2017, so 0.19 on VOT2018 seems good. Could you tell me your modification?
@HelloRicky123 Hi, I modified your loss function. In my opinion, they could have trained the tracker on the testing dataset, because there are too many successful cases on difficult frames that cannot be explained otherwise.
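The actual modification isn't shown in this thread, so for context here is a sketch of the baseline being modified: the standard SiamRPN training loss from the paper, i.e. softmax cross-entropy over anchor classification plus smooth-L1 on the box-regression offsets. The function name, the `lam` balance weight, and the {-1, 0, 1} label convention for ignored/negative/positive anchors are my assumptions, written in plain numpy:

```python
import numpy as np

def smooth_l1(x):
    # Huber-style penalty: quadratic near zero, linear beyond |x| = 1.
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x * x, ax - 0.5)

def siamrpn_loss(cls_logits, cls_labels, reg_pred, reg_target, lam=1.0):
    # cls_logits: (N, 2) scores per anchor; cls_labels: (N,) in {-1, 0, 1},
    # where -1 marks anchors ignored during training (assumed convention).
    # reg_pred / reg_target: (N, 4) normalized offsets (dx, dy, dw, dh).
    keep = cls_labels >= 0
    logits, labels = cls_logits[keep], cls_labels[keep]
    # numerically stable softmax cross-entropy over the two classes
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    cls_loss = -log_probs[np.arange(len(labels)), labels].mean()
    # regression loss only on positive anchors
    pos = cls_labels == 1
    if pos.any():
        reg_loss = smooth_l1(reg_pred[pos] - reg_target[pos]).sum(axis=1).mean()
    else:
        reg_loss = 0.0
    return cls_loss + lam * reg_loss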
Could you send me your modified code at zhangruiqi429@gmail.com? I will test it on VOT2015. I think they may have tuned the hyperparameters carefully so that these difficult frames are tracked successfully, but the tracker is not robust. Tracking is still far from the real world.