I tried using an image from another dataset to test the prediction. I used an image from the nuScenes dataset, which is 900×1600 (h×w), but when I run inference I get this error:
```
/content/drive/My Drive/AdaBins/models/layers.py in forward(self, x)
     17         embeddings = self.embedding_convPxP(x).flatten(2)  # .shape = n,c,s = n, embedding_dim, s
     18         # embeddings = nn.functional.pad(embeddings, (1,0))  # extra special token at start ?
---> 19         embeddings = embeddings + self.positional_encodings[:embeddings.shape[2], :].T.unsqueeze(0)
     20
     21         # change to S,N,E format required by transformer
RuntimeError: The size of tensor a (1400) must match the size of tensor b (500) at non-singleton dimension 2
```
But when I resize the image to the KITTI shape, or to 352×704, it works.
Do you have any suggestion on which size I should choose?
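For context, here is a small sketch of what I think is going on, based on the traceback: the mViT's 16×16 patch embedding runs on the decoder output (half the input resolution), and the learned positional encodings only cover 500 patches. The constants below (patch size 16, 500 positions, half-resolution decoder output) are assumptions inferred from the error numbers, not verified against every AdaBins checkout:

```python
# Assumed constants, inferred from the traceback (1400 vs. 500 patches):
MAX_PATCHES = 500   # length of the learned positional encodings
PATCH_SIZE = 16     # kernel/stride of embedding_convPxP

def n_patches(h: int, w: int) -> int:
    # Decoder output is (h/2, w/2); the patch grid divides that by 16.
    return (h // 2 // PATCH_SIZE) * (w // 2 // PATCH_SIZE)

for h, w in [(900, 1600), (352, 704), (352, 1216)]:
    count = n_patches(h, w)
    status = "ok" if count <= MAX_PATCHES else "too large"
    print(f"{h}x{w}: {count} patches -> {status}")
# 900x1600 gives 28*50 = 1400 patches (the "tensor a (1400)" in the error),
# while 352x704 gives 11*22 = 242 and KITTI's 352x1216 gives 11*38 = 418,
# both within the 500-patch limit.
```

If this is right, any input whose patch count stays at or below 500 should pass this layer, which would explain why the KITTI shape and 352×704 both work.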