Closed: CILT closed this issue 3 years ago
I should add that the error seems to be raised when encoding the frame in atlasnet.py
Hi @CILT, this looks like a shape mismatch error. You may want to make sure the size of input_image
in this line is 3xHxW.
Hi! Thank you for the quick response.
I've executed the following command: print(input_image.shape)
and it outputs:
torch.Size([3, 1440, 1080])
With 3xHxW, do you mean dimensions 3xHeightxWidth? Currently, my video is 3xWidthxHeight.
Should I transpose my RGB .npy array, then?
Thank you.
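If transposing is needed, swapping the last two axes of the array is enough. A minimal sketch with NumPy, assuming a channels-first layout of 3 x Width x Height like the (3, 1440, 1080) shape printed above (the array contents here are dummy data):

```python
import numpy as np

# Hypothetical single frame stored as 3 x Width x Height.
frame_cwh = np.zeros((3, 1440, 1080), dtype=np.uint8)

# Swap the last two axes to get 3 x Height x Width.
frame_chw = np.transpose(frame_cwh, (0, 2, 1))
print(frame_chw.shape)  # (3, 1080, 1440)

# For a whole sequence stored as (num_frames, 3, W, H),
# transpose only the spatial axes and keep frame/channel axes in place.
seq = np.zeros((10, 3, 1440, 1080), dtype=np.uint8)
seq_hw = np.transpose(seq, (0, 1, 3, 2))
print(seq_hw.shape)  # (10, 3, 1080, 1440)
```

np.transpose with an explicit axis permutation avoids accidentally swapping the channel axis, which a bare .T would do.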
Please make sure the input image size matches opt.H and opt.W, as this looks like the main problem. And yes, it's probably a good idea to make it 3 x height x width. Please feel free to reopen if the issue persists.
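A quick sanity check along these lines can catch the mismatch before it reaches the network. This is only a sketch: it assumes opt.H and opt.W hold the resolution the model expects, and the concrete values and frame below are made up for illustration:

```python
import numpy as np

class Opt:
    # Stand-in for the project's parsed options; values are illustrative.
    H, W = 1080, 1440

opt = Opt()

# Hypothetical input frame loaded from the .npy file.
input_image = np.zeros((3, 1080, 1440), dtype=np.float32)

# Fail early with a readable message instead of a shape-mismatch error
# deep inside the network's forward pass.
expected = (3, opt.H, opt.W)
assert input_image.shape == expected, (
    f"expected {expected}, got {input_image.shape}"
)
print("input shape OK:", input_image.shape)
```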
Hi, I've tested your work with a trained model, as shown in the examples. Everything worked fine using the sequences provided in data/sequences. Now, I'm trying to see what happens if I feed it a sequence I created myself, from a real video. I encoded the video in RGB and then created the .npy file from it. When executing the project with the following parameters, I get a RuntimeError:
RuntimeError: mat1 dim 1 must match mat2 dim 0
The execution parameters are:
Then, the full stack trace:
I'd appreciate any feedback, thanks.
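For context, this kind of "mat1 dim 1 must match mat2 dim 0" error typically means a flattened feature vector is being multiplied by a weight matrix that was sized for a different input resolution. A minimal illustration with NumPy (not the project's code; the dimensions are made up), where a fully connected layer is modeled as a plain matrix multiply:

```python
import numpy as np

# A weight matrix "trained" to accept 200-dim feature vectors.
weights = np.random.rand(200, 50)

features_ok = np.random.rand(1, 200)    # matches the trained input size
features_bad = np.random.rand(1, 300)   # wrong H/W upstream -> wrong length

print((features_ok @ weights).shape)    # (1, 50)

try:
    # Same kind of inner-dimension mismatch the stack trace reports.
    features_bad @ weights
except ValueError as e:
    print("shape mismatch:", e)
```

This is why checking the frame size against the resolution the model was trained with resolves the error.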