ejcgt / attention-target-detection

[CVPR2020] "Detecting Attended Visual Targets in Video"
MIT License

Only one annotation per test image, and different AUC evaluation for the GazeFollow and VideoAttention datasets #12

Open Frandre opened 2 years ago

Frandre commented 2 years ago

Dear authors,

Thanks for sharing your code and data.

I found that:

  1. Although the paper claims that two annotations are available for each test image, the released annotations appear to contain only one annotation per image. May I ask where we can download the full annotations for your test set?
  2. In your released code, you use different methods to compute AUC for the GazeFollow and VideoAttention datasets. For GazeFollow, you build the multi-hot vector from the original annotations (10 points per image). For your own dataset, you place a Gaussian on top of the single annotation, set every value greater than 0 to 1, and use that binary map as the multi-hot vector (see the sketch after this list). However, your paper defines AUC only once. Could you please confirm whether two different versions of AUC were used in the paper?
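
To make the difference concrete, here is a minimal sketch of the two ground-truth constructions as I understand them. The function names, the normalized (x, y) annotation format, and the 3-sigma truncation of the Gaussian are my assumptions for illustration, not your actual evaluation code:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def multi_hot_from_points(points, out_hw):
    # GazeFollow-style ground truth (assumed): mark each of the
    # original annotated gaze points as 1 in an otherwise-zero map.
    h, w = out_hw
    gt = np.zeros((h, w), dtype=np.float32)
    for x, y in points:  # normalized coordinates in [0, 1]
        gt[min(int(y * h), h - 1), min(int(x * w), w - 1)] = 1.0
    return gt

def multi_hot_from_gaussian(point, out_hw, sigma=3):
    # VideoAttentionTarget-style ground truth (as described above):
    # draw a Gaussian around the single annotation, then binarize
    # every value greater than 0. The 3-sigma support window is an
    # assumption; it keeps the map zero far from the annotation.
    h, w = out_hw
    gt = np.zeros((h, w), dtype=np.float32)
    cx, cy = int(point[0] * w), int(point[1] * h)
    r = 3 * sigma
    for y in range(max(0, cy - r), min(h, cy + r + 1)):
        for x in range(max(0, cx - r), min(w, cx + r + 1)):
            gt[y, x] = np.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                              / (2 * sigma ** 2))
    return (gt > 0).astype(np.float32)

def auc(gt_map, pred_heatmap):
    # Either binary map is flattened and used as the label vector
    # against the predicted heatmap's flattened scores.
    return roc_auc_score(gt_map.flatten(), pred_heatmap.flatten())
```

If my reading is right, the Gaussian construction marks many more pixels as positives than the 10-point construction does, so I would expect the two AUC numbers not to be directly comparable.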

Cheers, Yu