alexanderkroner / saliency

Contextual Encoder-Decoder Network for Visual Saliency Prediction [Neural Networks 2020]
MIT License

Reproducing paper results #10

Closed luciajajcay closed 3 years ago

luciajajcay commented 3 years ago

Hi, I'd just like to double-check that I'm doing what I think I'm doing. If I want to reproduce the results of your paper on my own dataset, I need to run `python main.py test -d mit1003 -p [path to my images]`. This will download/use a model that was first trained on SALICON and then fine-tuned on MIT1003, which is what you presented in the paper. Correct? Thanks in advance!

alexanderkroner commented 3 years ago

Hey, that's right! As you said, this command will use a model first trained on SALICON and then fine-tuned on MIT1003 to predict saliency maps. In the paper, I also used networks fine-tuned on other datasets (cat2000, dutomron, pascals, osie), so depending on your images, you could also try one of those other versions.
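For reference, a minimal sketch of how those other variants might be invoked, assuming the same CLI flags as in the command quoted above (the image path below is only a placeholder for your own directory):

```bash
# Same test command as above, with -d pointing at one of the other fine-tuned models.
# /path/to/my/images is a placeholder for the directory containing your input images.
python main.py test -d cat2000 -p /path/to/my/images
python main.py test -d dutomron -p /path/to/my/images
python main.py test -d pascals -p /path/to/my/images
python main.py test -d osie -p /path/to/my/images
```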