VisionLearningGroup / caption-guided-saliency

Supplementary material to "Top-down Visual Saliency Guided by Captions" (CVPR 2017)
https://visionlearninggroup.github.io/caption-guided-saliency/
BSD 2-Clause "Simplified" License

standard visualization for Flickr30k #7

Closed xingtengfei closed 6 years ago

xingtengfei commented 7 years ago

Hello, I did not find the code for the standard visualization for Flickr30k. Could you give me some information about it? Also, when I was training on the Flickr30k dataset, I got a ResourceExhaustedError. Are two K20s not enough to meet the experimental requirements? I would be grateful for your help.

ramanishka commented 7 years ago

Hi. 5 GB cards? And why two? The code doesn't use a multi-GPU setup in any way.

Yes, that might be the case. I don't remember the exact requirements, but try decreasing the batch size. Also, note this line: https://github.com/VisionLearningGroup/caption-guided-saliency/blob/3f4cdc1e276eb65b69a5f679f4d066f8416f769c/s2vt_model.py#L3

I'm quite busy right now, sorry. Maybe in a few weeks. Btw, it should be quite straightforward to figure out how to visualize it using the MSR-VTT visualization code.

xingtengfei commented 7 years ago

I'm so sorry to bother you. When I was training on MSR-VTT, the code worked. I've decreased batch_size, but it still does not work.

ramanishka commented 7 years ago

No worries. I would expect that the Flickr30k model needs at least twice as much memory (simply because of the explicitly unrolled LSTM). Try decreasing the batch size and/or the LSTM hidden state size (in the config).
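
For example, something along these lines (a hedged sketch: the actual parameter names live in the repo's config file and may differ from these illustrative ones):

```python
# Hypothetical knobs to cut memory use; the real names are in the repo's
# config file and may differ from these illustrative ones.
config = {
    "batch_size": 16,    # e.g. halve it until the ResourceExhaustedError goes away
    "hidden_size": 512,  # a smaller LSTM state also shrinks the unrolled graph
}
```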

xingtengfei commented 7 years ago

You are a genius. Thank you very much for your help.

xingtengfei commented 7 years ago

I've implemented the standard visualization for Flickr30k based on your code, but I don't know if it's completely correct. Can you check it out? visualization.zip

ramanishka commented 6 years ago

@xingtengfei Sorry, the archive seems to be broken; I cannot open it. Could you check?

xingtengfei commented 6 years ago

I created a new file, visua_flickr30k.

ramanishka commented 6 years ago

@xingtengfei Did you forget to attach it? :)

xingtengfei commented 6 years ago

I am stupid, haha. I forgot to open a pull request.

xingtengfei commented 6 years ago

Is my code OK?

ramanishka commented 6 years ago

Sorry, I didn't have time to look into it. I'll do it tomorrow.

ramanishka commented 6 years ago

I looked through the code. It seems you've tried to keep the Flickr30k code as close as possible to the one we published for MSR-VTT. Thanks for that, but here's what we need to take into account:

First of all, notice the discrepancy in frame preprocessing between training and visualization (in the current code):

  1. Training: the shorter side is scaled to 256px, then a 224px center crop is taken (this is done to mimic previously published papers).
  2. Visualization: frames are scaled to 400x300 (frame extraction) and then scaled again to 299x299 (without preserving the aspect ratio). The reason for this is to get a saliency map for the whole frame, not only for its central part. For visualization purposes, we extract features again from all frames of the specified input video. Thus, the resulting dimensionality is Tx8x8x2048 (the training dimensions are 26x1x2048; the 8x8 feature maps are mean-pooled as in InceptionV3). In your case, you don't extract features again, so the image should be properly cropped/scaled (see the sketch after this list).
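
For illustration, here is a minimal sketch of the two pipelines described above, using PIL; the function names are mine, not from the repo:

```python
# A minimal sketch of the two preprocessing pipelines described above.
# PIL-based; the function names are illustrative, not from the repo.
from PIL import Image

def preprocess_for_training(img):
    """Scale the shorter side to 256 px, then take a 224x224 center crop."""
    w, h = img.size
    scale = 256.0 / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    w, h = img.size
    left, top = (w - 224) // 2, (h - 224) // 2
    return img.crop((left, top, left + 224, top + 224))

def preprocess_for_visualization(img):
    """Scale straight to 299x299, ignoring the aspect ratio, so the
    saliency map covers the whole frame rather than a center crop."""
    return img.resize((299, 299), Image.BILINEAR)
```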

For Flickr30k, training is performed on the 8x8x2048 feature map of each image directly, by unrolling it into a 64x2048 'time sequence', as described in the paper.
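
The unrolling itself amounts to a simple reshape; a numpy sketch of the shapes described above:

```python
# Sketch of the unrolling described above: one 8x8x2048 InceptionV3
# feature map becomes a 64x2048 "time sequence" for the LSTM.
import numpy as np

feature_map = np.zeros((8, 8, 2048), dtype=np.float32)  # conv features of one image
time_sequence = feature_map.reshape(64, 2048)           # 64 spatial positions as time steps
```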

Look into this gist, which is based on your version of my code (see the notes about 'normalization' in L170-180): https://gist.github.com/ramanishka/ccf59b400d99e6aac452f50952525e2f

I'm closing this issue.