Closed: iqbal-chowdhury closed this issue 7 years ago
It looks like you might be using ResNet or GoogLeNet features (size 2048), while the code expects VGGNet's 4096. Just change the code to reflect the size of your image features.
The arguments.py file under the utils folder has configuration like this:
```python
parser.add_argument('-img_vec_dim', type=int, default=2048)
parser.add_argument('-img_features', type=str, default='resnet')
parser.add_argument('-img_normalize', type=int, default=0)
```
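To keep the two flags from drifting apart, one option is to derive `img_vec_dim` from the chosen `img_features` backbone instead of setting it by hand. This is a minimal sketch, not code from the repo; the `FEATURE_DIMS` mapping and variable names are assumptions for illustration:

```python
import argparse

# Assumed mapping from CNN backbone to feature vector size
# (2048 for ResNet/GoogLeNet pool5, 4096 for VGGNet fc7).
FEATURE_DIMS = {'vgg': 4096, 'resnet': 2048}

parser = argparse.ArgumentParser()
parser.add_argument('-img_features', type=str, default='resnet')
parser.add_argument('-img_normalize', type=int, default=0)
args, _ = parser.parse_known_args([])

# Pick the dimension that matches the backbone that produced the features.
img_vec_dim = FEATURE_DIMS[args.img_features]
print(img_vec_dim)  # 2048 for the default 'resnet'
```

With this pattern, switching to VGG features only requires changing `-img_features vgg`, and the dimension follows automatically.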
When img_vec_dim is changed to 4096, it shows this error:
```
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-4rPeHA-build/h5py/_objects.c:2684)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-4rPeHA-build/h5py/_objects.c:2642)
File "/home/iqbal/.local/lib/python2.7/site-packages/h5py/_hl/group.py", line 166, in __getitem__
  oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-4rPeHA-build/h5py/_objects.c:2684)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-4rPeHA-build/h5py/_objects.c:2642)
File "h5py/h5o.pyx", line 190, in h5py.h5o.open (/tmp/pip-4rPeHA-build/h5py/h5o.c:3570)
ValueError: Not a location (Invalid object id)
```
Hmm, looking at the error, it looks like an incompatibility with the input file. What are your HDF5 data files? It is possible that the VQA_LSTM_CNN repo has changed the format of the data on which this code depends.
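A quick way to check for this kind of incompatibility is to list the dataset names and shapes actually stored in the HDF5 file: a "Not a location (Invalid object id)" ValueError typically means the code asked for a group or dataset name that does not exist in the file. The sketch below builds a toy file so it is self-contained; the file and dataset names are illustrative, not the repo's actual layout:

```python
import h5py
import numpy as np

# Toy HDF5 file standing in for the real one; in practice, open the data
# file produced by the VQA_LSTM_CNN preprocessing instead.
with h5py.File('toy_features.h5', 'w') as f:
    f.create_dataset('images_train', data=np.zeros((3, 2048), dtype='float32'))

# Inspect the top-level dataset names and their shapes, then compare them
# against the names and dimensions the training code tries to read.
with h5py.File('toy_features.h5', 'r') as f:
    shapes = {name: f[name].shape for name in f.keys()}

print(shapes)  # e.g. {'images_train': (3, 2048)}
```

If the names or shapes differ from what the code expects, either regenerate the file with the matching preprocessing version or adjust the code's dataset keys and `img_vec_dim` accordingly.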
If you are looking for VQA starter code in Python with fewer dependencies than mine, the following are good too ---
If you want state-of-the-art models, most of them are in Lua (Torch). Let me know and I can share.
Hi,
I am going to create the h5 file from VQA_LSTM_CNN properly. Also, please share the state-of-the-art models implemented in Lua (Torch). Thanks for the links.
VQA - [SOTAs]
More details here https://github.com/jnhwkim/awesome-vqa
Hi,
I am getting this error while running python train.py -model DeeperLSTM or python train.py -model simple_mlp