LittlePika opened this issue 1 month ago
Hello everyone,
Thank you very much for providing the toolkit. In some papers you can see evaluations broken down by attribute, e.g. camera motion. In common.py the tag files are also read in, but I did not see any information about this in the documentation. How can I perform such an evaluation?
I would be very happy to receive an answer. Many thanks in advance.
Could you let me know if you are asking about a custom dataset? If so, the tags are indeed the key: you have to annotate the suitable frames.
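For a custom dataset, a minimal sketch of such an annotation might look like the following. It assumes the legacy per-sequence tag-file convention (one 0/1 value per frame in a file named `<attribute>.tag`); the helper name `write_tag_file` and the directory layout are hypothetical, so check what common.py actually expects for your dataset.

```python
# Minimal sketch for annotating attribute frames in a custom sequence.
# Assumes the per-sequence tag-file convention (one 0/1 value per frame
# in a file named <attribute>.tag); verify against what common.py reads.
from pathlib import Path

def write_tag_file(sequence_dir, attribute, frame_flags):
    """Write a per-frame binary tag file, e.g. occlusion.tag."""
    path = Path(sequence_dir) / f"{attribute}.tag"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("\n".join(str(int(flag)) for flag in frame_flags) + "\n")

# Example: mark frames 10-24 of a 100-frame sequence as occluded.
flags = [10 <= i < 25 for i in range(100)]
write_tag_file("my_dataset/sequence01", "occlusion", flags)
```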
As for generating the results, it all depends on the evaluation: some older approaches have this implemented in the toolkit. The new single-target analysis, based on anchor restarts, does not have a public implementation, but the approach is also a lot less sophisticated; we are talking about a weighted sum over the sequences. If this is what you are interested in, I can look into adding the code to the toolkit; it is an extension that reuses a lot of toolkit code.
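To make the "weighted sum over the sequences" idea concrete, here is a rough sketch, not the toolkit implementation: it averages a per-frame quality score over all frames that carry a given tag, which implicitly weights each sequence by its number of tagged frames. All names (`attribute_score`, `per_frame_scores`, `tags`) are hypothetical.

```python
# Illustrative per-attribute weighted average. This is NOT the toolkit
# API; per_frame_scores holds one quality value per frame (e.g. overlap)
# and tags holds the per-frame 0/1 attribute flags from the .tag files.

def attribute_score(sequences, attribute):
    """Average the per-frame score over all frames carrying the tag,
    weighting each sequence by its number of tagged frames."""
    total, count = 0.0, 0
    for seq in sequences:
        flags = seq["tags"].get(attribute, [])
        for score, flagged in zip(seq["per_frame_scores"], flags):
            if flagged:
                total += score
                count += 1
    return total / count if count else float("nan")

sequences = [
    {"per_frame_scores": [0.8, 0.6, 0.7], "tags": {"occlusion": [0, 1, 1]}},
    {"per_frame_scores": [0.9, 0.5], "tags": {"occlusion": [1, 0]}},
]
print(attribute_score(sequences, "occlusion"))  # mean over tagged frames
```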
Thank you for your fast response. The question is not about a specific dataset. Let's take VOT2018 as a common example, with frame-wise labels such as the occlusion tag. Based on my understanding of the graphs, e.g. Figure 3 ("Failure rate with respect to the visual attributes") and Figure 7 of the accompanying VOT challenge report (https://prints.vicos.si/publications/365), it is possible to generate the results for the individual attributes separately for a detailed evaluation.
My question is how to obtain similar graphs directly with the VOT toolkit. Is it possible to generate them using a flag, similar to the HTML report?
Do I therefore understand your answer correctly that this is possible in an evaluation with the public legacy code (https://github.com/votchallenge/toolkit-legacy), but not (yet) in the new toolkit? If this is part of the additional code you mentioned, I would greatly appreciate it being made available. Thank you very much.