samyak0210 / ViNet

ViNet: Pushing the limits of Visual Modality for Audio-Visual Saliency Prediction

Evaluation metrics and code #11

Open · kayleeliyx opened this issue 3 years ago

kayleeliyx commented 3 years ago

I was trying to evaluate a model after training, and I noticed that the ground-truth labels of the test dataset have not been released.

In the evaluation code provided at https://mmcheng.net/videosal/, I found this comment: "if the ground truth cannot be found, e.g. testing data, the central gaussian will be taken as ground truth automatically."

However, the actual code is:

if exist(saliency_path, 'file')
    % Ground truth exists: load the saliency map, rescale to [0, 1],
    % and evaluate the predicted result against it
    I = double(imread(saliency_path)) / 255;
    allMetrics(i) = fh(result, I);
else
    % No ground truth (e.g. the test set): the metric is simply set to NaN;
    % no central gaussian is substituted, despite the comment
    allMetrics(i) = nan;
end

Then, at the end:

allMetrics(isnan(allMetrics)) = [];   % drop the NaN entries entirely
meanMetric = mean(allMetrics);        % average only over frames that had ground truth

I'm wondering, for a test set without ground truth, how is the "central gaussian" supposed to be generated?
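
My guess is a 2D Gaussian centered on the frame, something like the sketch below (H, W, and sigma are placeholder values I picked, not values taken from the toolkit):

% A centered 2D Gaussian as a stand-in ground truth, normalized to [0, 1].
% H, W, and sigma are assumed placeholders, not values from the toolkit.
H = 360; W = 640;                      % assumed frame resolution
sigma = min(H, W) / 4;                 % assumed spread of the center prior
[x, y] = meshgrid(1:W, 1:H);
cx = (W + 1) / 2; cy = (H + 1) / 2;    % frame center
G = exp(-((x - cx).^2 + (y - cy).^2) / (2 * sigma^2));
G = G / max(G(:));                     % match the [0, 1] scale of the loaded maps

But I'd like to confirm how it is actually done.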

Another question: are the numbers listed on the leaderboard at https://mmcheng.net/videosal/ computed on the validation set or the test set?

Thanks a lot for your help!

chhanganivarun commented 1 year ago

The evaluation code shared in this repository wasn't used; only the original MATLAB files from the link you posted were used. Further, the test-set results are reported as-is by the challenge organizers.