hazirbas / poselstm-pytorch

PyTorch implementation of PoseLSTM and PoseNet

Using a different image size #16

Closed AntiLibrary5 closed 3 years ago

AntiLibrary5 commented 3 years ago

Hi. Thank you for your work. I've been trying to run the PoseNet model on a different dataset. The image size is 2880x2160 [HxW]. So I first resize and save the images with compute_image_mean.py, and then use the arguments --loadSize and --fineSize when running train.py to square-crop the images to 512x512. But I get an error that I haven't been able to trace back.

But when I don't use the --fineSize argument (which I believe should crop to the default size of 224), the model runs fine.

CMD:

python util/compute_image_mean.py --dataroot /dataroot/ --height 720 --width 540 --save_images
python train.py --model posenet --dataroot /dataroot/ --name posenet/Hyundai/beta500 --beta 500 --gpu 0 --loadSize 540 --fineSize 512

ERROR:

posenet
initializing the weights from pretrained_models/places-googlenet.pickle
---------- Networks initialized -------------
model [PoseNetModel] was created
Traceback (most recent call last):
  File "train.py", line 38, in <module>
    model.optimize_parameters()
  File "/home/varora/PythonProjects/posenet/poselstm-pytorch/models/posenet_model.py", line 101, in optimize_parameters
    self.forward()
  File "/home/varora/PythonProjects/posenet/poselstm-pytorch/models/posenet_model.py", line 76, in forward
    self.pred_B = self.netG(self.input_A)
  File "/home/varora/conda4.3.30/envs/posenet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/varora/PythonProjects/posenet/poselstm-pytorch/models/networks.py", line 219, in forward
    return self.cls1_fc(output_4a) + self.cls2_fc(output_4d) + self.cls3_fc(output_5b)
  File "/home/varora/conda4.3.30/envs/posenet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/varora/PythonProjects/posenet/poselstm-pytorch/models/networks.py", line 106, in forward
    output = self.cls_fc_pose(output.view(output.size(0), -1))
  File "/home/varora/conda4.3.30/envs/posenet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/varora/conda4.3.30/envs/posenet/lib/python3.8/site-packages/torch/nn/modules/container.py", line 119, in forward
    input = module(input)
  File "/home/varora/conda4.3.30/envs/posenet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/varora/conda4.3.30/envs/posenet/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 94, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/varora/conda4.3.30/envs/posenet/lib/python3.8/site-packages/torch/nn/functional.py", line 1753, in linear
    return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 dim 1 must match mat2 dim 0


Any help is appreciated. Thank you.

hazirbas commented 3 years ago

Hey, a linear layer cannot operate on inputs of arbitrary size: its weight matrix is fixed to the flattened feature dimension produced by the default 224x224 crop, so a 512x512 crop changes that dimension and the matrix multiplication in F.linear fails. You may want to load the model, convert the linear layer to a convolutional layer, and then use your custom size.
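
For reference, a minimal sketch of that conversion. The layer names and shapes below are illustrative assumptions, not the exact ones in networks.py: it assumes a head that flattens a 128-channel 4x4 feature map into a 2048-dim vector and applies nn.Linear(2048, 1024). Check the real head in networks.py before adapting it.

import torch
import torch.nn as nn

# Hypothetical head: flattened 128 * 4 * 4 = 2048 features -> 1024.
fc = nn.Linear(2048, 1024)

# Equivalent convolution: one kernel spanning the whole 4x4 map.
# Its weights are the linear weights reshaped, so a checkpoint
# trained at 224x224 can be reused unchanged.
conv = nn.Conv2d(in_channels=128, out_channels=1024, kernel_size=4)
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(1024, 128, 4, 4))
    conv.bias.copy_(fc.bias)

# At the original feature size both layers agree:
x = torch.randn(1, 128, 4, 4)
assert torch.allclose(fc(x.flatten(1)), conv(x).flatten(1), atol=1e-6)

# At a larger input the conv yields a grid of predictions instead of
# crashing; pool it back to a single vector, e.g. by averaging:
x_big = torch.randn(1, 128, 10, 10)   # e.g. features from a 512x512 crop
y = conv(x_big).mean(dim=(2, 3))      # shape (1, 1024)

An alternative that leaves the weights untouched is to insert an nn.AdaptiveAvgPool2d before the flatten, which forces the spatial size back to what the linear layer expects, at the cost of averaging away some spatial detail.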