Closed SzuPc closed 4 months ago
At a glance, the code looks good to me. The codebase here is a rewrite of the one used in the paper, so there are bound to be some differences (along with a less filtered dataset). We also averaged over the number of tracks, not videos, in the paper, from what I remember. Both of these together could account for some of the difference. That said, a factor of ~2 is large, so I'll have to dig up the old code and look through it (it might be something like a scale change from the initial data to what we released).
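To illustrate why the averaging choice matters: when videos contain different numbers of tracks, a per-track mean and a per-video mean can differ substantially. This is a minimal sketch with made-up numbers, not code from either repository:

```python
import numpy as np

# Hypothetical endpoint errors grouped by video, with a variable
# number of tracks per video (values are illustrative only).
errors_per_video = [
    np.array([1.0, 2.0, 3.0]),  # video A: 3 tracks
    np.array([10.0]),           # video B: 1 track
]

# Average over all tracks pooled together (as described for the paper):
per_track_mean = np.concatenate(errors_per_video).mean()  # (1+2+3+10)/4 = 4.0

# Average over videos (mean of per-video means):
per_video_mean = np.mean([e.mean() for e in errors_per_video])  # (2.0+10.0)/2 = 6.0

print(per_track_mean, per_video_mean)
```

Here the per-video mean is 1.5x the per-track mean; with a skewed track distribution the gap can be larger, which is why this alone could contribute to (but likely not fully explain) a ~2x discrepancy.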
For now, the relative ordering from your figure looks good (as in MFT<CSRT<Control), so evaluating comparative performance should be fine.
Regarding SENDD, we will be unable to release the code for this framework, but we note that since SENDD is a framewise tracker (scene flow) without drift correction, we expect methods that re-localize on top of framewise trackers, such as MFTs, to outperform on this challenge.
I'm going to close this for now since there are other tasks that are taking up my time for the STIR challenge preparation in the meantime.
Thank you for your last reply. I successfully reproduced the endpoint error values for Control and CSRT. The trend of the curve is similar to the figure in the STIR paper, but the values are 2.2-2.3 times higher than those in the paper. Do you know if any other processing was done? Below is my code for the numerical processing.
This is the experimental result I got. I also want to ask whether you will release the code for the SENDD model. If so, it would be of great help to us in participating in the challenge.