Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
swin2_tiny failed to run forward(): RuntimeError: unflatten: Provided sizes [64, 64] don't multiply up to the size of dim 2 (64) in the input tensor. #252
I tried with the swin2_tiny backbone. During inference at https://github.com/isl-org/MiDaS/blob/bdc4ed64c095e026dc0a2f17cabb14d58263decb/midas/backbones/utils.py#L72 it gave the error above.

The input at this layer has shape (b, 64, 64, 96), where b is the batch size. The next operator, pretrained.act_postprocess1, is an Unflatten(dim=2, unflattened_size=torch.Size([64, 64])). I don't think this Unflatten works on any dimension of (b, 64, 64, 96); on the other hand, it seems (b, 64, 64, 96) has already been unflattened. Has anyone tried training or inference with the swin backbones?
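For reference, here is a minimal standalone sketch (my own, not the MiDaS code) that reproduces the mismatch: nn.Unflatten(dim=2, unflattened_size=(64, 64)) requires dim 2 of its input to have size 64 * 64 = 4096, while the Swin activation already has its spatial dims split out, so dim 2 is only 64.

```python
import torch
import torch.nn as nn

# The operator from act_postprocess1, as reported above.
unflatten = nn.Unflatten(dim=2, unflattened_size=torch.Size([64, 64]))

# Swin activation: spatial dims already separate, shape (b, 64, 64, 96).
x = torch.randn(1, 64, 64, 96)
try:
    unflatten(x)
except RuntimeError as e:
    print(e)  # unflatten: Provided sizes [64, 64] don't multiply up to
              # the size of dim 2 (64) in the input tensor

# A shape the module would accept (hypothetical ViT-style layout):
# tokens transposed to (b, C, H*W), so dim 2 holds 64 * 64 = 4096 values.
x_flat = torch.randn(1, 96, 64 * 64)
print(unflatten(x_flat).shape)  # torch.Size([1, 96, 64, 64])
```

If that reading is right, the (b, 64, 64, 96) activation would need to be flattened (or this Unflatten skipped) before act_postprocess1 is applied, but I haven't verified what the intended path for the swin backbones is.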