VisionLearningGroup / caption-guided-saliency

Supplementary material to "Top-down Visual Saliency Guided by Captions" (CVPR 2017)
https://visionlearninggroup.github.io/caption-guided-saliency/
BSD 2-Clause "Simplified" License

Pre-trained model #4

Open malzantot opened 7 years ago

malzantot commented 7 years ago

Hello,

Would you mind sharing your pre-trained model, for those who are interested in trying your system without having to go through training?

Thanks, Moustafa

ramanishka commented 7 years ago

Hi,

Here is the tarball, which should be extracted under the experiments/ directory. Then you can simply run the evaluation or visualization command with --checkpoint 96:

python run_s2vt.py --dataset MSR-VTT --test --checkpoint 96

or

python visualization.py --dataset MSR-VTT --media_id video9461 --checkpoint 96 --sentence "A man is driving a car"
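The extraction step above can be sketched as follows. This is a self-contained illustration, not the exact layout of the released archive: the tarball name (`model.tgz`) and the `MSR-VTT/checkpoint` contents are placeholders standing in for the real download, so substitute the actual file from the link above.

```shell
# Placeholder for the downloaded tarball (real filename and contents may differ):
mkdir -p MSR-VTT && touch MSR-VTT/checkpoint
tar -czf model.tgz MSR-VTT

# The actual step: unpack the archive under the experiments/ directory
mkdir -p experiments
tar -xzf model.tgz -C experiments/

# The checkpoint files should now be visible here
ls experiments/MSR-VTT
```

After extraction, `--checkpoint 96` in the commands above tells `run_s2vt.py` and `visualization.py` which saved checkpoint to load.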

Please let me know if something goes wrong.