ChinChyi opened this issue 1 year ago
Thanks for your interest in our work! First of all, we collect the saliency maps provided by the authors or generated from public code, and evaluate their scores using the same evaluation code across all our experiments. In our paper, we use the ave-F score to evaluate our method, as most USOD methods do, and keep it consistent in all experiments. As far as I know, most VSOD methods report the max-F score in their papers. Our implementation also calculates the max-F score of our method, so you can easily compare on max-F if you want.
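For readers unfamiliar with the distinction: both scores come from sweeping a binarization threshold over the predicted saliency map and computing an F-measure at each threshold; ave-F averages across thresholds while max-F keeps the best one, so max-F is always at least as large and the two are not directly comparable. A minimal sketch of the standard definitions (not the repository's actual evaluation code; `sal_map`/`gt_mask` are placeholder inputs, and beta^2 = 0.3 is the weighting conventionally used in the SOD literature):

```python
import numpy as np

def f_scores(sal, gt, beta2=0.3, n_thresh=255):
    """Per-threshold F-measures for one saliency map vs. a binary GT mask.

    sal: float array in [0, 1]; gt: boolean array of the same shape.
    """
    gt = gt.astype(bool)
    fs = []
    for t in np.linspace(0.0, 1.0, n_thresh, endpoint=False):
        pred = sal > t
        tp = np.logical_and(pred, gt).sum()
        prec = tp / (pred.sum() + 1e-8)
        rec = tp / (gt.sum() + 1e-8)
        fs.append((1 + beta2) * prec * rec / (beta2 * prec + rec + 1e-8))
    return np.array(fs)

# Placeholder data standing in for a predicted map and its ground truth.
rng = np.random.default_rng(0)
sal_map = rng.random((8, 8))
gt_mask = sal_map > 0.5

scores = f_scores(sal_map, gt_mask)
ave_f = scores.mean()  # ave-F: average over all thresholds
max_f = scores.max()   # max-F: best single threshold, so max_f >= ave_f
```

In a real benchmark these per-image scores are then averaged over the whole dataset, which is why a method's reported max-F can look noticeably higher than its ave-F on the same maps.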
Thank you for your response. Could you provide your VSOD test results? I am currently working on weakly-supervised VSOD research and would like to compare performance with your work. Additionally, could you also provide the test results on the ViSal and VOS datasets? Thank you very much.
Thanks for your citation! Sorry, I am working on other projects right now. With our code, there are two ways to produce the test results: 1) use test.py as shown in the README; 2) use deploy.py, which can generate the saliency maps with some modifications.
Could you provide the test results on the four VSOD datasets you evaluated? I am concerned that my own test results may differ from yours.
Regarding video saliency performance: I noticed that in your paper's section on video saliency, the reported F-measure differs significantly from the values reported in previous papers. What is the reason for this?