shariqfarooq123 / AdaBins

Official implementation of AdaBins: Depth Estimation using Adaptive Bins
GNU General Public License v3.0

Is there any requirement on input image size for prediction (inference)? #12

Closed JianWu313 closed 3 years ago

JianWu313 commented 3 years ago

I tried using images from other datasets to test the prediction. I used an image from the nuScenes dataset, whose size is 900×1600 (h×w), but when I run inference there is an error:

```
/content/drive/My Drive/AdaBins/models/layers.py in forward(self, x)
     17         embeddings = self.embedding_convPxP(x).flatten(2)  # .shape = n,c,s = n, embedding_dim, s
     18         # embeddings = nn.functional.pad(embeddings, (1,0))  # extra special token at start ?
---> 19         embeddings = embeddings + self.positional_encodings[:embeddings.shape[2], :].T.unsqueeze(0)
     20
     21         # change to S,N,E format required by transformer

RuntimeError: The size of tensor a (1400) must match the size of tensor b (500) at non-singleton dimension 2
```

But when I resize it to the shape of the KITTI dataset, or to 352×704, it works. Do you have any suggestion on which size I should choose?
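The error comes from the fixed-size positional-encoding table in `layers.py`: the patch embedding turns the feature map into a token sequence, and that sequence must not be longer than the table (500 entries, per the error message). A minimal sketch of the arithmetic, assuming a 16×16 patch conv applied to a feature map at half the input resolution (both assumptions for illustration, matching the numbers in the traceback):

```python
# Sketch: why input resolution matters for AdaBins inference.
# MAX_TOKENS comes from the error message ("tensor b (500)");
# PATCH=16 and the half-resolution feature map are assumptions.

MAX_TOKENS = 500  # length of self.positional_encodings in layers.py
PATCH = 16        # assumed kernel/stride of embedding_convPxP


def num_tokens(h, w, downscale=2):
    """Token-sequence length produced by the patch embedding."""
    fh, fw = h // downscale, w // downscale
    return (fh // PATCH) * (fw // PATCH)


print(num_tokens(900, 1600))  # 28 * 50 = 1400 -> exceeds 500, RuntimeError
print(num_tokens(352, 704))   # 11 * 22 = 242  -> fits, inference works
```

Under these assumptions, a 900×1600 nuScenes image yields 1400 tokens, exactly the mismatched size in the error, while 352×704 yields 242 and fits, which is why resizing to the KITTI shape makes the model run.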

shariqfarooq123 commented 3 years ago

Duplicate of #5