marcellacornia / mlnet

A Deep Multi-Level Network for Saliency Prediction. ICPR 2016
MIT License
94 stars 37 forks

Reproducing results when training using only SALICON dataset #16

Closed MartaCollPol closed 6 years ago

MartaCollPol commented 6 years ago

Hi @marcellacornia,

I'm trying to reproduce your results when training only on the SALICON dataset, and I have a couple of questions. First of all, were the predictions you provide on the SALICON validation set obtained with a model trained only on SALICON?

And are the scripts you provide the ones you used for training only on SALICON, or the ones for fine-tuning on MIT300? If I want to train only on SALICON, is there anything I should change to obtain the same results as you? (For example, I wonder whether padding the images is still necessary.)

Thank you in advance!

Marta

marcellacornia commented 6 years ago

Hi @MartaCollPol, thanks for downloading our code.

Which SALICON version are you using? In 2017, a new version of this dataset was released but we did not use it in our experiments. If you want to replicate our results, you have to use the 2015 version of the SALICON dataset.

The weights we provide were obtained by training our ML-Net on the SALICON dataset only.

MartaCollPol commented 6 years ago

Oh, I see. I've been using the 2017 version, so I'm going to switch to the 2015 one. I'm interested in training the model myself to obtain results similar to yours on the different metrics, so I don't need the ML-Net weights you provide. Your paper says that you fine-tuned on MIT300, which is why I was wondering whether the code, as currently published, is prepared for that fine-tuning or for training on SALICON.

marcellacornia commented 6 years ago

For the results on the MIT300 dataset, we fine-tuned the network, pre-trained on SALICON, on 900 randomly selected images of the MIT1003 dataset, as suggested by the MIT Saliency Benchmark.

The code is the same as the one used for training on the SALICON dataset. You just have to change the image paths and the number of images used for training and validation in the config.py file.
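The exact contents of config.py may differ from what is shown here; the following is a minimal sketch, with hypothetical variable names and placeholder paths, of the kind of edit described above (switching from SALICON to a 900/103 MIT1003 fine-tuning split):

```python
# Hypothetical sketch of the config.py edits described above.
# Variable and path names are illustrative; check the actual
# config.py shipped with the repository.

# Paths to the fine-tuning images and ground-truth saliency maps
# (MIT1003 here instead of SALICON; point these at your own data).
imgs_train_path = '/path/to/mit1003/train/images/'
maps_train_path = '/path/to/mit1003/train/maps/'
imgs_val_path = '/path/to/mit1003/val/images/'
maps_val_path = '/path/to/mit1003/val/maps/'

# Number of images: 900 MIT1003 images for fine-tuning, the
# remaining 103 for validation, as suggested in the thread above.
nb_imgs_train = 900
nb_imgs_val = 103
```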

MartaCollPol commented 6 years ago

I've trained ML-Net using the 2015 dataset and I'm still getting an AUC Judd score of 0.813, which is lower than the score I get when using your weights. Can you confirm that the SALICON version you used is the "previous release" at http://salicon.net/challenge-2017/? Or do you have any idea what could have gone wrong? (I haven't changed any parameters.)
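For reference, AUC-Judd (the metric mentioned above) is commonly computed by thresholding the saliency map at the values of the fixated pixels and integrating the resulting ROC curve. This is not the MIT benchmark's own code; it is a minimal NumPy sketch of the metric as commonly defined:

```python
import numpy as np

def auc_judd(saliency, fixations):
    """AUC-Judd sketch: thresholds are the saliency values at fixated pixels.

    saliency  -- 2-D float array (the predicted saliency map)
    fixations -- 2-D binary array, nonzero at fixated pixels
    """
    s = saliency.ravel().astype(float)
    f = fixations.ravel().astype(bool)
    # Normalize the map to [0, 1] so thresholds are comparable across maps.
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    thresholds = np.sort(s[f])[::-1]       # descending
    n_fix, n_pix = thresholds.size, s.size
    tp, fp = [0.0], [0.0]
    for i, th in enumerate(thresholds):
        above = int((s >= th).sum())       # pixels at or above the threshold
        tp.append((i + 1) / n_fix)         # fraction of fixations recovered
        fp.append((above - (i + 1)) / (n_pix - n_fix))
    tp.append(1.0)
    fp.append(1.0)
    # Trapezoidal integration of the ROC curve.
    auc = 0.0
    for i in range(len(fp) - 1):
        auc += (fp[i + 1] - fp[i]) * (tp[i] + tp[i + 1]) / 2.0
    return auc
```

A map that ranks the fixated pixel highest scores 1.0; scores around 0.5 indicate chance-level prediction.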

marcellacornia commented 6 years ago

Hi @MartaCollPol, sorry for the late reply.

Yes, our results were obtained using the previous release of the SALICON dataset. Which evaluation code are you using? For the SALICON dataset, we did not write our own evaluation code; we submitted the predicted maps to this CodaLab page.

MartaCollPol commented 6 years ago

Hi @marcellacornia,

I used the Python implementation of the evaluation metrics provided by the MIT Saliency Benchmark. I'm no longer trying to reproduce the results, so I'm closing the issue. Thank you for your help!