Hi,
Thanks for asking.
For LaSOT_depth, I have added the link on the page: the generated depth maps for LaSOT (Part01 - Part10).
For COCO and Got10K, the files are too large and I cannot find a place to upload them, or uploading would take too long. It is better to generate them yourself using the code of HighResDepth or DenseDepth. You can find the links and information on my page.
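As a rough sketch of what the generation loop looks like (the `estimate_depth` function below is only a placeholder for whichever estimator you pick, and the paths and 16-bit PNG format are assumptions, not our exact scripts):

```python
# Minimal sketch of a per-sequence depth generation loop.
# `estimate_depth(rgb)` is a hypothetical wrapper around whichever monocular
# estimator you use (e.g. DenseDepth or HighResDepth); it is assumed to
# return a float depth map with the same height/width as the input image.
from pathlib import Path

import cv2
import numpy as np


def generate_depth_for_sequence(rgb_dir, depth_dir, estimate_depth):
    depth_dir = Path(depth_dir)
    depth_dir.mkdir(parents=True, exist_ok=True)
    for rgb_path in sorted(Path(rgb_dir).glob("*.jpg")):
        rgb = cv2.cvtColor(cv2.imread(str(rgb_path)), cv2.COLOR_BGR2RGB)
        depth = estimate_depth(rgb).astype(np.float32)
        # Normalise to 16-bit PNG, a common storage format for depth maps.
        depth = (depth / max(float(depth.max()), 1e-6) * 65535.0).astype(np.uint16)
        cv2.imwrite(str(depth_dir / (rgb_path.stem + ".png")), depth)
```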
We selected the depth maps manually, mainly based on the quality of the bounding-box regions. We did not remove much from Got10K and COCO, so it is better to check the generated depth images yourself. I recommend HighResDepth. :)
For our training set, I am discussing the remaining sequences with my supervisor. I may publish all sequences after next week. Sorry for the delay.
I hope it helps.
On 12 Nov 2021, at 13:10, Fish Jack wrote:
Could you please provide your training datasets, i.e. the COCO_depth and LaSOT_depth datasets? Thank you very much.
In my impression, LaSOT has more RGB data than GOT10K and COCO. Why are their depth maps larger than LaSOT's?
Looking forward to the upload of the remaining training sequences and the test sequences!
@laisimiao @JackjackFan Yes, I will upload all training sequences, including the generated data and the 150 training sequences, before Monday! :)
Thanks for your reply and time. Best wishes!
For the generated GOT10K depth images, it's painful to download them by clicking one by one. Do you have any suggestions?
I will merge all the files into one big zip and update tonight.
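In the meantime, if anyone wants to pack their own generated folders, the merge is only a few lines of Python (the directory and archive names below are placeholders, not the final release layout):

```python
# Minimal sketch: pack all per-sequence depth folders into a single zip.
import zipfile
from pathlib import Path


def merge_to_zip(root_dir, out_zip):
    root = Path(root_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(root.rglob("*")):
            if path.is_file():
                zf.write(path, arcname=str(path.relative_to(root)))


# Example (placeholder paths):
# merge_to_zip("got10k_depth/", "got10k_depth.zip")
```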
@xiaozai Cool, you are a nice guy!
@laisimiao Done!
Thanks! My disks see the generated GOT10K and say "Why are you so big?" (LOL). BTW, I noticed you didn't mention using the generated GOT10K in your paper (only LaSOT and COCO). Have you added it for pretraining, and if so, are the results better?
@laisimiao In our DeT (ICCV2021) we only used LaSOT and COCO for pretraining, while in DOT (BMVC2021) we used LaSOT, Got10K, and COCO. We did not check the quality of the Got10K and COCO depth maps. I think the results will be slightly better if you use more high-quality depth training data. Maybe you can also try other monocular depth estimation methods to improve the quality of the depth images; it will help. I am trying other methods too, for example HighResDepth (CVPR2021), which is better than DenseDepth. BTW, it is better to generate the depth images yourself, so you can check them and delete the bad ones.
@laisimiao Our original idea was to provide a large training set and to get people to pay more attention to depth cues. And yes, if you train on the DepthTrack training set and test on our test set, the performance will improve because of the similar scenarios and objects and the image quality. Our DeT-DiMP is just a short-term tracker but also performs well on CDTB, so with a long-term setting the performance may be even higher. We still focus on CDTB, STC, and PTB, and the DepthTrack test set will be used as a supplementary set to CDTB in the next VOT challenge (maybe merged with CDTB, with some bad sequences removed from both CDTB and DepthTrack).
Okay, I will look through some monocular depth estimation methods and check the quality of the depth images on my own. Before that, may I ask: is the criterion for getting rid of bad depth sequences visualization, i.e. a mainly qualitative method?
@laisimiao For LaSOT, I checked the sequences one by one by visualisation, and it took many hours; I focused on the shape of the target. I don't have a very good suggestion for getting rid of bad sequences, other than that it is better to keep the depth consistency of the target, and I only checked the 4x region centered at the target that is used for training.
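A rough sketch of that kind of check (the 4x crop follows the comment above, but the helper names and thresholds are only illustrative, not the exact procedure we used):

```python
# Rough sketch of a depth quality check on the target region:
# crop a 4x region centered at the target box, require enough valid depth
# pixels, and flag frames where the median target depth jumps abruptly.
# The thresholds are illustrative, not values from the paper.
import numpy as np


def crop_search_region(depth, box, factor=4.0):
    """Crop a square region `factor` times the target size, centered on the box."""
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    half = factor * max(w, h) / 2.0
    x0, y0 = int(max(cx - half, 0)), int(max(cy - half, 0))
    x1, y1 = int(min(cx + half, depth.shape[1])), int(min(cy + half, depth.shape[0]))
    return depth[y0:y1, x0:x1]


def sequence_quality_flags(depth_maps, boxes, min_valid=0.5, max_rel_jump=0.5):
    """Return indices of frames with mostly-missing depth in the search region
    or a sudden jump of the median target depth between consecutive frames."""
    flagged, prev_median = [], None
    for i, (depth, box) in enumerate(zip(depth_maps, boxes)):
        region = crop_search_region(depth, box)
        if region.size == 0 or (region > 0).mean() < min_valid:
            flagged.append(i)  # too many missing / zero-depth pixels
            continue
        x, y, w, h = box
        target = depth[int(y):int(y + h), int(x):int(x + w)]
        valid = target[target > 0]
        median = float(np.median(valid)) if valid.size else 0.0
        if prev_median and median and abs(median - prev_median) / prev_median > max_rel_jump:
            flagged.append(i)  # target depth jumped, likely a bad estimate
        prev_median = median or prev_median
    return flagged
```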
@xiaozai So far I appreciate your open work, and I would like to follow it further.
@laisimiao Thanks! Good luck, and I hope RGBD and D tracking goes further!