The Error Log is as following:

/home/mmc_xhma/software/anconda3/bin/python3.6 /home/mmc_xhma/code/TMM_2017/pytorch-vqa-master/preprocess-images.py
/home/mmc_xhma/software/anconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
/home/mmc_xhma/software/anconda3/lib/python3.6/site-packages/torchvision-0.2.0-py3.6.egg/torchvision/transforms/transforms.py:156: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
found 82783 images in mscoco/train2014
found 40504 images in mscoco/val2014
0%| | 0/123287 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/mmc_xhma/code/TMM_2017/pytorch-vqa-master/preprocess-images.py", line 79, in <module>
main()
File "/home/mmc_xhma/code/TMM_2017/pytorch-vqa-master/preprocess-images.py", line 70, in main
out = net(imgs)
File "/home/mmc_xhma/software/anconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/home/mmc_xhma/code/TMM_2017/pytorch-vqa-master/preprocess-images.py", line 31, in forward
self.model(x)
File "/home/mmc_xhma/software/anconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/home/mmc_xhma/software/anconda3/lib/python3.6/site-packages/torchvision-0.2.0-py3.6.egg/torchvision/models/resnet.py", line 151, in forward
File "/home/mmc_xhma/software/anconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
result = self.forward(*input, **kwargs)
File "/home/mmc_xhma/software/anconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 55, in forward
return F.linear(input, self.weight, self.bias)
File "/home/mmc_xhma/software/anconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 835, in linear
return torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch at
config.py:

preprocess_batch_size = 64
image_size = 448  # scale shorter end of image to this size and centre crop
output_size = image_size // 32  # size of the feature maps after processing through a network
output_features = 2048  # number of feature maps thereof
central_fraction = 0.875  # only take this much of the centre when scaling and centre cropping
When the parameters are set to their defaults, the error occurs.

I have checked the input of the last fc layer in resnet152: the input shape is [64, 131072], but the weight matrix shape is [2048, 1000] and the bias is None.

File "/home/mmc_xhma/software/anconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 835, in linear
    return torch.addmm(bias, input, weight.t())

Obviously, the sizes mismatch. How can I fix this error?
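The 131072 is consistent with torchvision 0.2.0's ResNet, whose avgpool is AvgPool2d(7, stride=1): a 448-pixel input reaches layer4 as 14x14 feature maps, the pool shrinks them only to 8x8 instead of 1x1, and the flatten hands 2048 * 8 * 8 = 131072 features to an fc layer that expects 2048. A quick sanity check of that arithmetic (a sketch, assuming the stock stride-32 backbone and AvgPool2d(7, stride=1)):

```python
# Feature-map arithmetic for resnet152 with a 448x448 input,
# assuming torchvision 0.2.0's avgpool = nn.AvgPool2d(7, stride=1)
image_size = 448
stride = 32                    # total downsampling of the ResNet backbone
fmap = image_size // stride    # spatial size entering avgpool: 14
pooled = (fmap - 7) // 1 + 1   # AvgPool2d(kernel_size=7, stride=1): 8
flattened = 2048 * pooled * pooled
print(fmap, pooled, flattened)  # 14 8 131072 -- but fc expects 2048 inputs
```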
Either use the ResNet contained in the resnet directory/submodule or cut off the Linear layer, the last pooling layer, and the flattening from the ResNet that you have.