
DeepMVS: Learning Multi-View Stereopsis
https://phuang17.github.io/DeepMVS/index.html
BSD 2-Clause "Simplified" License

Permute image dimensions before normalizing #15

rasmus25 closed this 5 years ago

rasmus25 commented 5 years ago

This permutes the image dimensions before normalizing, to avoid the following error with PyTorch 0.4.0 and newer:

Successfully created VGG model.
Start working on image 0/43.
Traceback (most recent call last):
  File "python/test.py", line 147, in <module>
    VGG_tensor = Variable(VGG_normalize(torch.FloatTensor(ref_img_full)).permute(2, 0, 1).unsqueeze(0), volatile = True)
  File "/home/rasmus/anaconda2/envs/pytorch_p27/lib/python2.7/site-packages/torchvision/transforms/transforms.py", line 164, in __call__
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "/home/rasmus/anaconda2/envs/pytorch_p27/lib/python2.7/site-packages/torchvision/transforms/functional.py", line 208, in normalize
    tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
RuntimeError: The size of tensor a (3949) must match the size of tensor b (3) at non-singleton dimension 0

With this fix, I can get good-looking disparity images, but please comment on whether the fix is actually correct, or whether I instead need to swap the dimensions of the mean and std values when creating VGG_normalize.
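
For reference, here is the change in minimal, self-contained form. The ImageNet mean/std constants and the dummy image shape below are illustrative assumptions, not copied from python/test.py:

```python
import torch
from torchvision import transforms

# Standard ImageNet statistics (assumed here for illustration; the actual
# constants used by DeepMVS live in python/test.py).
VGG_normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])

# A dummy H x W x C image as loaded from disk (H = 3949 reproduces the error).
ref_img_full = torch.rand(3949, 2000, 3)

# Before the fix: Normalize receives an H x W x C tensor, so its per-channel
# mean (broadcast as shape 3 x 1 x 1) cannot match dimension 0 of size 3949:
# VGG_tensor = VGG_normalize(ref_img_full).permute(2, 0, 1).unsqueeze(0)

# After the fix: permute to C x H x W first, then normalize, then add a
# batch dimension.
VGG_tensor = VGG_normalize(ref_img_full.permute(2, 0, 1)).unsqueeze(0)
```

The mean and std themselves can stay as flat 3-element sequences: as the traceback shows, normalize() indexes them as mean[:, None, None], which broadcasts correctly over a C x H x W tensor, so only the image tensor needs its channel dimension moved to the front.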

phuang17 commented 5 years ago

This change looks good to me. Thanks for fixing this! 😄