Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
I want to run this model on an Android device together with other models in real time.
The inference time is already close to my target, but I'd like to speed it up a bit more.
So I'm wondering: could I change the input and output resolution from 256 to 192 in the pretrained model without breaking it? Do you think this is possible?
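For context on why 192 might work: if the model is fully convolutional, the pretrained weights don't depend on a fixed spatial size, and any input resolution divisible by the encoder's total downsampling stride should pass through without shape errors. A minimal sketch of that divisibility check, assuming a total stride of 32 (a guess based on the EfficientNet-style backbone of the small MiDaS model; verify against the actual network before relying on it):

```python
# Sketch: check whether a candidate input resolution is compatible with a
# fully convolutional encoder. TOTAL_STRIDE = 32 is an assumption, not
# confirmed from the MiDaS code.
TOTAL_STRIDE = 32  # assumed overall downsampling factor of the encoder

def is_valid_resolution(size: int, stride: int = TOTAL_STRIDE) -> bool:
    """Return True if `size` divides evenly by the network's total stride."""
    return size % stride == 0

for candidate in (256, 192, 200):
    print(candidate, is_valid_resolution(candidate))
# 256 and 192 are both multiples of 32, so they should be safe;
# 200 is not, and would likely cause shape mismatches in skip connections.
```

Note that accuracy at 192 may drop relative to 256, since the model was trained at a specific resolution, so it is worth benchmarking both speed and depth quality after the change.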