Hey all... I'm just trying to run this on some HD PNG images. Am I missing something?
`X:\miniconda3\envs\AdaBins2\python.exe C:/Users/proscans/AdaBins/testItOut.py
Loading base model ()...Using cache found in C:\Users\proscans/.cache\torch\hub\rwightman_gen-efficientnet-pytorch_master
Done.
Removing last two layers (global_pool & classifier).
Building Encoder-Decoder model..Done.
Traceback (most recent call last):
  File "C:/Users/proscans/AdaBins/testItOut.py", line 12, in <module>
    bin_centers, predicted_depth = infer_helper.predict_pil(img)
  File "X:\miniconda3\envs\AdaBins2\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\proscans\AdaBins\infer.py", line 95, in predict_pil
    bin_centers, pred = self.predict(img)
  File "X:\miniconda3\envs\AdaBins2\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\proscans\AdaBins\infer.py", line 106, in predict
    bins, pred = self.model(image)
  File "X:\miniconda3\envs\AdaBins2\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\proscans\AdaBins\models\unet_adaptive_bins.py", line 94, in forward
    bin_widths_normed, range_attention_maps = self.adaptive_bins_layer(unet_out)
  File "X:\miniconda3\envs\AdaBins2\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\proscans\AdaBins\models\miniViT.py", line 25, in forward
    tgt = self.patch_transformer(x.clone())  # .shape = S, N, E
  File "X:\miniconda3\envs\AdaBins2\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\proscans\AdaBins\models\layers.py", line 19, in forward
    embeddings = embeddings + self.positional_encodings[:embeddings.shape[2], :].T.unsqueeze(0)
RuntimeError: The size of tensor a (1980) must match the size of tensor b (500) at non-singleton dimension 2
Process finished with exit code 1`
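For context, the error says the patch sequence produced from the HD input (1980 embeddings) is longer than the positional-encoding table in `models/layers.py`, which appears to hold only 500 entries. A minimal workaround sketch, assuming Pillow is available and that the checkpoint expects roughly its training resolution (640x480 for the NYU model); the 1920x1080 size here is just a stand-in for an actual HD image loaded with `Image.open`:

```python
from PIL import Image

# The patch transformer pre-computes a fixed number of positional encodings
# (500), so the patch sequence from a large input (1980 in the traceback)
# overflows the table. Downscaling the image before inference keeps the
# sequence length within range.
img = Image.new("RGB", (1920, 1080))          # stand-in for Image.open("your_hd_image.png")
img = img.resize((640, 480), Image.BILINEAR)  # assumed training resolution (NYU)

# then run inference on the resized image:
# bin_centers, predicted_depth = infer_helper.predict_pil(img)
```

The predicted depth map can be upsampled back to the original resolution afterwards if needed; only the network input has to stay small.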