isl-org / MiDaS

Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"
MIT License

torch.device("cuda")? #183

Open ford442 opened 1 year ago

ford442 commented 1 year ago

I have been experimenting with loading models in float16 on CUDA and on CPU. Is it best to use map_location=torch.device("cpu"), or can we use torch.device("cuda:0")? Was MiDaS trained on CPU, making this a situation where map_location='cpu' is required?
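For reference, this is roughly what I'm doing; the checkpoint path and `build_midas_model()` are placeholders, not the repo's actual API:

```python
import torch

# Sketch only: map_location controls where the weights are deserialized;
# it does not need to match the device the model was trained on.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Load weights onto CPU first (safe even if they were saved from a GPU run),
# then move the assembled model to the target device.
state_dict = torch.load("weights/dpt_large_384.pt", map_location="cpu")  # path is hypothetical
model = build_midas_model()          # placeholder for the model constructor
model.load_state_dict(state_dict)
model.to(device).eval()
```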

I have also been trying to run MiDaS on the CPU to save VRAM for other models, but it runs very slowly. Does anyone have advice on optimizing for that setup? I could also swear that I get better results in 32-bit mode anyhow...? A rough sketch of my CPU setup follows.
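The shape of my CPU run looks roughly like this (`model` and `input_batch` are assumed to come from the usual MiDaS preprocessing pipeline; the thread count is just an example):

```python
import torch

# CPU-only sketch. Keeping fp32 on CPU: float16 CPU kernels are often
# slow or unsupported, so hard-casting to half usually doesn't help here.
torch.set_num_threads(8)                 # illustrative; match your physical core count

model = model.to("cpu").float().eval()
with torch.no_grad():
    prediction = model(input_batch.to("cpu").float())
```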

Thanks for your input! :)

ford442 commented 1 year ago

Is MiDaS always supposed to use .half(), i.e. float16 for everything?
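Or is the intended usage something more like the following sketch, where fp16 is only used on CUDA and only through autocast instead of hard-casting the weights? (`model` and `input_batch` are placeholders on my part.)

```python
import torch

# fp16 sketch, assuming a CUDA device is available.
# autocast keeps the weights in fp32 and runs eligible ops in float16,
# so nothing is permanently converted with .half().
device = torch.device("cuda:0")
model = model.to(device).eval()

with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    prediction = model(input_batch.to(device))

depth = prediction.float()               # back to fp32 before post-processing
```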