You should certainly try torch 0.3; I did not test this code with torch 0.4.
Thanks for the quick reply. I tried it with torch 0.3, and it works. However, the pretrained model answers 'yes' to pretty much any question for all of the images I tried.
I think I relied on `dictionary.items()` ordering somewhere. I know that the iteration order can change depending on your system.
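For example, a minimal sketch of the problem (`word_counts` is a hypothetical {word: count} dict, not a name from this repo):

```python
word_counts = {'yes': 120, 'no': 95, 'two': 40}

# Fragile: the id assigned to each word depends on dict iteration order,
# which is not guaranteed to match across systems/Python versions.
wid_to_word = {i: w for i, (w, _) in enumerate(word_counts.items())}

# Deterministic alternative: sort the vocabulary before assigning ids.
wid_to_word = {i: w for i, w in enumerate(sorted(word_counts))}
```

If the preprocessing assigned ids the fragile way on my machine, your locally built `wid_to_word.pickle` and `ans_to_aid.pickle` could map the same words and answers to different ids.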
I uploaded my preprocessed files. Please verify that your `wid_to_word.pickle` and `ans_to_aid.pickle` dictionaries are the same as mine. If not, use these preprocessed files instead of yours:
wget http://data.lip6.fr/cadene/vqa.pytorch/vqa2/nans,2000_maxlength,26_minwcount,0_nlp,mcb_pad,right_trainsplit,train.tar.gz
wget http://data.lip6.fr/cadene/vqa.pytorch/vqa2/nans,2000_maxlength,26_minwcount,0_nlp,mcb_pad,right_trainsplit,trainval.tar.gz
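To compare the dictionaries, something like this should work (the paths are hypothetical; point them at your extracted archive and your own `data/vqa2/processed` directory):

```python
import pickle

def load(path):
    with open(path, 'rb') as f:
        return pickle.load(f)

for name in ('wid_to_word.pickle', 'ans_to_aid.pickle'):
    theirs = load('downloaded/' + name)         # from the tar.gz above
    mine = load('data/vqa2/processed/' + name)  # your local preprocessing
    print(name, 'identical:', theirs == mine)
```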
To be clear, these are files that should be inside `data/vqa2/processed`, right?
If so, they were different. Replacing them did not change much, though: the model still answers "yes" to most questions, and sometimes it now answers "no".
Yes, inside `data/vqa2/processed`.
This behavior is not normal :/ You should certainly train your own model.
I am sorry but I can't help you with that. I am currently working on a better implementation. It should be out soon.
ok, thanks.
When trying to run the demo,

model(visual, question)

fails with an error. If I change line 134 to:

lengths = [max_length - input.data.eq(0).sum(1).squeeze()]

I don't get the error anymore, but I only get nonsensical results with the pretrained model for different images. I don't know if this is related or not. Using torch 0.4.
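For reference, here is a standalone sketch of the torch 0.4 behavior and an alternative fix that avoids wrapping a 0-dim tensor in a list (the values are made-up stand-ins, not from the repo):

```python
import torch

max_length = 5  # stand-in for the repo's max question length (26 in the config)
input = torch.tensor([[12, 7, 3, 0, 0]])  # a padded question batch; 0 is the pad id

# In torch 0.4, .squeeze() on a batch of size 1 produces a 0-dim tensor,
# which breaks code that expects a list of Python ints. Converting with
# .tolist() yields one int per sample regardless of batch size.
lengths = (max_length - input.eq(0).sum(1)).tolist()
print(lengths)  # [3]
```

Unlike wrapping the squeezed value in a list, `.tolist()` also works for batch sizes larger than 1.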