Open baibizhe opened 1 year ago
I'm seeing this same issue
Have you found a solution?
Not yet
I was able to run inference on a GPU with less VRAM than yours by wrapping the forward pass in torch.no_grad():
with torch.no_grad():
    depth_pred = zerodepth_model(rgb, intrinsics)
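In case it helps, here is a minimal, untested sketch of that approach combined with float16 autocast, which in general roughly halves activation memory in PyTorch. The input shapes, the intrinsics values, and the use of torch.autocast with this model are my assumptions, not something confirmed for ZeroDepth:

import torch

# Load the model as in the original post and put it in eval mode on the GPU.
zerodepth_model = torch.hub.load(
    "TRI-ML/vidar", "ZeroDepth", pretrained=True, trust_repo=True
).cuda().eval()

# Placeholder inputs; replace with your real image tensor and camera intrinsics.
rgb = torch.rand(1, 3, 360, 640, device="cuda")
intrinsics = torch.tensor([[[500.0, 0.0, 320.0],
                            [0.0, 500.0, 180.0],
                            [0.0, 0.0, 1.0]]], device="cuda")

# no_grad avoids storing activations for backprop; autocast runs most ops
# in float16 to cut activation memory further (unverified for this model).
with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    depth_pred = zerodepth_model(rgb, intrinsics)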
Hello. I am trying to run inference with
zerodepth_model = torch.hub.load("TRI-ML/vidar", "ZeroDepth", pretrained=True, trust_repo=True)
However, it only works if I resize the input image to an extremely small size, for example 144x256. If the image size is 640x360, the GPU runs out of memory. I run all my experiments on an A100 40G. Is this normal? Best regards, Bizhe
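One caveat if downscaling is the workaround: for a pinhole camera, the intrinsics matrix must be rescaled together with the image, since resizing changes the effective focal length and principal point. Below is a minimal sketch of that bookkeeping; the helper name resize_with_intrinsics and the (1, 3, 3) matrix layout [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] are my assumptions about the expected input format:

import torch
import torch.nn.functional as F

def resize_with_intrinsics(rgb, intrinsics, new_hw):
    # rgb: (1, 3, H, W) float tensor; intrinsics: (1, 3, 3) pinhole matrix.
    _, _, h, w = rgb.shape
    new_h, new_w = new_hw
    rgb_small = F.interpolate(
        rgb, size=(new_h, new_w), mode="bilinear", align_corners=False
    )
    K = intrinsics.clone()
    K[:, 0, :] = K[:, 0, :] * (new_w / w)  # scales fx and cx with the width
    K[:, 1, :] = K[:, 1, :] * (new_h / h)  # scales fy and cy with the height
    return rgb_small, K

# Example: shrink a 720x1280 placeholder image to 360x640 before inference.
rgb = torch.rand(1, 3, 720, 1280)
intrinsics = torch.tensor([[[1000.0, 0.0, 640.0],
                            [0.0, 1000.0, 360.0],
                            [0.0, 0.0, 1.0]]])
rgb_small, K_small = resize_with_intrinsics(rgb, intrinsics, (360, 640))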