Closed: xingtengfei closed this issue 6 years ago
Hello, I did not find the code for the standard visualization on Flickr30k. Could you give me some information about it? Also, when I was training on the Flickr30k dataset I got a ResourceExhaustedError. Can two K20s not meet the experimental requirements? I will be grateful for your help.
Hi. 5 GBs? Why two? The code doesn't use a multi-GPU setup in any way.
Yes, it might be the case. I don't remember the exact requirements, but try decreasing the batch size. Also, note this: https://github.com/VisionLearningGroup/caption-guided-saliency/blob/3f4cdc1e276eb65b69a5f679f4d066f8416f769c/s2vt_model.py#L3
I'm quite busy right now, sorry. Maybe in a few weeks. Btw, it should be quite straightforward to figure out how to visualize it using the MSR-VTT visualization.
I'm so sorry to bother you. When I was training on MSR-VTT the code worked. I've decreased batch_size, but it still doesn't work.
No worries. I would expect that at least twice as much memory is needed for the Flickr30k model (simply because of the explicitly unrolled LSTM). Try decreasing the batch size and/or the LSTM hidden state size (in the config).
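Just as a rough illustration of the kind of settings I mean (the actual option names in the repo's config may differ, and the values below are made up):

```python
# Hypothetical config excerpt, not the repo's actual file: the two knobs that
# dominate GPU memory for the explicitly unrolled LSTM are the batch size and
# the hidden state size.
config = {
    "batch_size": 16,    # e.g. halve it until the ResourceExhaustedError goes away
    "hidden_size": 512,  # LSTM hidden state size; a smaller state needs less memory
}
```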
You are a genius. Thank you very much for your help.
I've implemented the standard visualization for Flickr30k based on your code, but I don't know if it's completely correct. Can you check it out? visualization.zip
@xingtengfei Sorry, the archive seems to be broken, I cannot open it. Could you check?
I created a new file: visua_flickr30k
@xingtengfei Did you forget to attach it? :)
I am stupid, haha. I forgot to open a pull request.
Is my code ok?
Sorry, I didn't have time to look into it. I'll do it tomorrow.
I looked through the code. It seems you've tried to keep the Flickr30k code as close as possible to the one we published for MSR-VTT. Thanks for that, but here's what we need to take into account:
First of all, notice the discrepancy in frame preprocessing between training and visualization (in the current code):
For Flickr30k, training is performed directly on the 8x8x2048 feature map of every frame by unrolling it into a 64x2048 'time sequence', as described in the paper.
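As a rough sketch (not the repo's exact code; the variable names below are just placeholders), the unrolling is essentially a reshape of the spatial grid into the time axis:

```python
import numpy as np

# One Flickr30k image gives an 8x8x2048 ResNet feature map; it is fed to the
# LSTM as a sequence of 64 "time steps", one per spatial location.
feature_map = np.random.rand(8, 8, 2048).astype(np.float32)  # placeholder features
time_sequence = feature_map.reshape(64, 2048)                # 64 locations x 2048 dims
assert time_sequence.shape == (64, 2048)
```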
Have a look at this gist based on your version of my code (see the notes about 'normalization' in L170-180): https://gist.github.com/ramanishka/ccf59b400d99e6aac452f50952525e2f
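In case it helps, here is a generic illustration of per-map min-max normalization for visualization. The gist's actual normalization (its L170-180) may differ, e.g. it may normalize across all words rather than per map, and normalize_saliency is just a hypothetical helper name:

```python
import numpy as np

def normalize_saliency(saliency):
    """Min-max normalize a single 8x8 saliency map to [0, 1] before upsampling/overlay."""
    s = saliency.astype(np.float32)
    s -= s.min()
    rng = s.max()
    return s / rng if rng > 0 else s  # avoid division by zero on flat maps

word_saliency = np.random.rand(8, 8)         # placeholder attention weights for one word
heatmap = normalize_saliency(word_saliency)  # values in [0, 1]
```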
I'm closing this issue.