SSRSGJYD / NeuralTexture

Unofficial PyTorch implementation of the paper "Deferred Neural Rendering: Image Synthesis using Neural Textures".

RuntimeError: tensor size mismatch #4

Open fastcode3d opened 4 years ago

fastcode3d commented 4 years ago

When I run train.py, line 52 in pipeline.py, `x[:, 3:12, :, :] = x[:, 3:12, :, :] * basis[:, :]`, reports an error:

RuntimeError: The size of tensor a (512) must match the size of tensor b (9) at non-singleton dimension 3

Then I printed the corresponding shapes:

basis.shape: torch.Size([32, 9])
x[:, 3:12, :, :].shape: torch.Size([32, 9, 512, 512])

What's the problem?

SSRSGJYD commented 4 years ago

It is a bug caused by PyTorch's broadcasting semantics. It should be:

basis = basis.view(basis.shape[0], basis.shape[1], 1, 1)
x[:, 3:12, :, :] = x[:, 3:12, :, :] * basis
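A toy-sized sketch of why the original multiply fails and why the `view` fixes it (the shapes are scaled down from the (32, 9, 512, 512) case in the report; the tensor sizes here are illustrative):

```python
import torch

x = torch.randn(2, 12, 4, 4)  # toy stand-in for the network output
basis = torch.randn(2, 9)     # toy per-sample basis coefficients

# Direct multiply fails: broadcasting aligns trailing dimensions,
# so (2, 9, 4, 4) vs (2, 9) compares 4 with 9 -> size mismatch.
try:
    _ = x[:, 3:12] * basis
except RuntimeError as e:
    print("mismatch:", e)

# Two trailing singleton dims make basis (2, 9, 1, 1), which
# broadcasts over the spatial dimensions as intended.
b = basis.view(basis.shape[0], basis.shape[1], 1, 1)
y = x[:, 3:12] * b
print(y.shape)  # torch.Size([2, 9, 4, 4])
```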
fastcode3d commented 4 years ago

Thank you for your reply. I applied the reshape, but the operation still reports an error:

Traceback (most recent call last):
  File "train.py", line 140, in <module>
    main()
  File "train.py", line 129, in main
    loss.backward()
  File "/lib/python3.7/site-packages/torch/tensor.py", line 102, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/lib/python3.7/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

Is this caused by the torch version? I installed according to the requirements.

szulm commented 3 years ago

Same question here.


SSRSGJYD commented 3 years ago

This error occurs because of the in-place assignment to x, which overwrites values that autograd needs for the backward pass. To solve the problem, instead of assigning new values into x, create a new tensor.
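A minimal sketch of that fix, assuming x has at least 12 channels as in the report (the function name, channel count, and tensor sizes are illustrative, not taken from the repo):

```python
import torch

def apply_basis(x, basis):
    """Scale channels 3:12 of x by per-sample basis coefficients
    without modifying x in place, so autograd stays happy."""
    # reshape (N, 9) -> (N, 9, 1, 1) so it broadcasts over H and W
    b = basis.view(basis.shape[0], basis.shape[1], 1, 1)
    # build a new tensor instead of writing into x in place
    return torch.cat([x[:, :3], x[:, 3:12] * b, x[:, 12:]], dim=1)

x = torch.randn(2, 16, 8, 8, requires_grad=True)
basis = torch.randn(2, 9)
y = apply_basis(x, basis)
y.sum().backward()  # no "modified by an inplace operation" error
```

Since the result is a fresh tensor produced by torch.cat, x itself is never mutated and the backward pass can reuse its saved values.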