isl-org / MiDaS

Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
MIT License

change the input / output size of the model #104

Open rodrigoGA opened 3 years ago

rodrigoGA commented 3 years ago

I want to run this model on an Android device together with other models in real time. The inference time is close to my goal, but I'd like to speed it up a bit more.

So I'm wondering: could I change the input and output resolution of the pretrained model from 256 to 192 without breaking it? Do you think this is possible?

ranftlr commented 3 years ago

The network is fully convolutional, so you can change the input resolution to any multiple of 32 pixels.
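
The multiple-of-32 constraint comes from the encoder repeatedly halving the spatial resolution. A minimal sketch (assuming, as an illustration, a MiDaS-style encoder with five stride-2 stages, i.e. a total downsampling factor of 2^5 = 32) traces why both 256 and 192 work:

```python
def encoder_shapes(size, stages=5):
    """Trace the spatial size through `stages` stride-2 downsamplings.

    Each stage halves the resolution, so the input must be divisible
    by 2**stages (32 for five stages) to avoid fractional sizes.
    """
    shapes = [size]
    for _ in range(stages):
        if size % 2 != 0:
            raise ValueError(
                f"size {size} is not divisible by 2; "
                f"input must be a multiple of {2 ** stages}"
            )
        size //= 2
        shapes.append(size)
    return shapes

print(encoder_shapes(256))  # [256, 128, 64, 32, 16, 8]
print(encoder_shapes(192))  # [192, 96, 48, 24, 12, 6]
```

Since 192 = 6 x 32, a 192-pixel input passes cleanly through every stage, so the pretrained weights can be reused at that resolution without any surgery on the model.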