rdroste / unisal

Unified Image and Video Saliency Modeling (ECCV 2020)
https://arxiv.org/abs/2003.05477
Apache License 2.0

Evaluation Metric #5

Closed silent357 closed 3 years ago

silent357 commented 3 years ago

Hi, thank you for sharing your excellent work. I have a question about running the code. I found that the evaluation value is computed as the average over all videos rather than over all frames of all videos. This makes the evaluation value higher than the one obtained with the metric implementations used in other papers. Is my understanding correct?

rdroste commented 3 years ago

Thanks for the question. We ensure that our evaluation is equivalent to prior work by computing the average over all videos weighted by the respective number of frames. This results in the same scores as computing the average over all frames of all videos. See this line in the code: https://github.com/rdroste/unisal/blob/46661a6a617c5252592c9fa51a09af482dfeac70/unisal/train.py#L776. So our implementation of the metrics produces the same results as the implementations of other papers. Let me know if you have any further questions.
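
For intuition, here is a minimal sketch (not the repository code; the video names, frame counts, and scores are made up) showing why a per-video average weighted by frame count is identical to a plain average over all frames:

```python
import numpy as np

# Hypothetical per-frame metric scores (e.g. NSS) for three videos
# of different lengths.
per_video_scores = {
    "video_a": np.array([1.0, 1.2, 0.8]),            # 3 frames
    "video_b": np.array([2.0, 2.4]),                 # 2 frames
    "video_c": np.array([0.5, 0.7, 0.9, 1.1, 1.3]),  # 5 frames
}

# Average over all frames of all videos (the convention in prior work).
all_frames = np.concatenate(list(per_video_scores.values()))
frame_mean = all_frames.mean()

# Per-video averages, then a mean over videos weighted by frame count
# (the scheme described above).
video_means = np.array([s.mean() for s in per_video_scores.values()])
frame_counts = np.array([len(s) for s in per_video_scores.values()])
weighted_video_mean = np.average(video_means, weights=frame_counts)

# The two quantities agree; an *unweighted* mean over videos would not.
assert np.isclose(frame_mean, weighted_video_mean)
print(frame_mean, weighted_video_mean)
```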

silent357 commented 3 years ago

Got it, thank you.