YvanYin / Metric3D

The repo for "Metric3D: Towards Zero-shot Metric 3D Prediction from A Single Image" and "Metric3Dv2: A Versatile Monocular Geometric Foundation Model..."
https://jugghm.github.io/Metric3Dv2/
BSD 2-Clause "Simplified" License

Running inference on CPU #149

Open · AD-lite24 opened 3 weeks ago

AD-lite24 commented 3 weeks ago

Hi, I was wondering if there is any support for CPU inference. The sample script from hubconf.py doesn't run even after removing all the code that moves tensors and models to CUDA, apparently because of internal lines that still expect CUDA, for example:

torch.autocast(device_type='cuda', dtype=torch.bfloat16, enabled=False)

in mono/model/decode_heads/RAFTDepthNormalDPTDecoder5.py

I'm not sure how many more such instances there are, so I wanted to get that clarified. I'm sure it will be difficult to run on CPU, but still.
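
For anyone hitting the same error, here is a minimal sketch of one possible workaround: wrap such calls in a helper that falls back to a no-op context on CPU. The helper name maybe_autocast is hypothetical and not part of the repo; each CUDA-specific call site in the decoder would still need to be patched by hand.

```python
import contextlib
import torch

# Hypothetical replacement for the hard-coded
# torch.autocast(device_type='cuda', ...) call. Since the original
# call already passes enabled=False, a plain null context is an
# equivalent no-op on CPU.
def maybe_autocast(device: torch.device):
    if device.type == 'cuda':
        return torch.autocast(device_type='cuda', dtype=torch.bfloat16, enabled=False)
    # CPU path: skip autocast entirely and stay in float32.
    return contextlib.nullcontext()

# Usage at a patched call site would then look like:
# with maybe_autocast(x.device):
#     ...  # decoder forward pass
```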

elvistheyo commented 2 weeks ago

@AD-lite24 were you able to run it on CPU?

AD-lite24 commented 2 weeks ago

@elvistheyo Nope. As I said, it would take a lot of effort that might end up wasted anyway. Let me know if you choose to try it out, though; I could try to assist you with it if possible.

JUGGHM commented 1 week ago

I think it will be difficult and not beneficial to infer on cpu. Approximately it will take 1.5~4 minutes to perform one inference for the ViT-L model. Additionally, one important acceleration library xformers does not support cpu as well. The type torch.bfloat16 is only supported on GPU. The data type for all tensors should be torch.float32 for cpu devices.