Closed frankfengdi closed 3 years ago
These numbers depend on the batch size and the hardware you are running inference with. I can give you these for my personal workstation for example:
GPU: NVIDIA Titan XP
Batch Size: 2
Max GPU Memory Consumption: 7712 MB
Avg. Iteration Speed: 1.82 it/s
Total Time for KITTI Val Inference: 17 min, 15 seconds
Feel free to run this yourself and see what kind of performance you get.
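The reported numbers are self-consistent, which you can check with a few lines of arithmetic. This sketch assumes the standard 3769-sample KITTI val split; the batch size and iteration speed are the figures quoted above:

```python
import math

# Figures reported in the comment above.
batch_size = 2
it_per_s = 1.82
num_val_samples = 3769  # assumed: standard KITTI 3D val split size

iterations = math.ceil(num_val_samples / batch_size)  # 1885 batches
total_s = iterations / it_per_s
minutes, seconds = divmod(total_s, 60)
print(f"{iterations} iterations -> {int(minutes)} min, {int(seconds)} s")
# -> 1885 iterations -> 17 min, 15 s
```

That lines up with the "17 min, 15 seconds" total above, so the timing is dominated by per-batch inference speed rather than data loading.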
It's very odd. My GPU is an NVIDIA RTX 3090, but it takes 10 s per image.
Hi @rockywind,
Are you using torch==1.4.0? If you are using a newer torch version, the inference will take a very long time.
If you want to keep your current environment with a newer torch version, you can change this line
from:
```python
batch_dict[key] = kornia.image_to_tensor(val).float().cuda()
```
to:
```python
batch_dict[key] = kornia.image_to_tensor(val).float().cuda().contiguous()
```
I am working on changing this in the repo, but you can make this quick fix for now.
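For anyone wondering why `.contiguous()` helps: `kornia.image_to_tensor` permutes the image from HWC to CHW layout, which produces a strided view rather than a copy, and downstream ops can be much slower on non-contiguous memory. A minimal sketch of the same concept using NumPy (so it runs without torch; the torch equivalent of `np.ascontiguousarray` is `Tensor.contiguous()`):

```python
import numpy as np

# An HWC image, as loaded from disk.
img_hwc = np.zeros((4, 4, 3), dtype=np.float32)

# Permuting to CHW gives a strided view, not a row-major copy.
img_chw = np.transpose(img_hwc, (2, 0, 1))
print(img_chw.flags["C_CONTIGUOUS"])  # False

# Copy into standard row-major layout, analogous to .contiguous() in torch.
img_chw = np.ascontiguousarray(img_chw)
print(img_chw.flags["C_CONTIGUOUS"])  # True
```

Whether the permuted tensor ends up contiguous depends on the torch/kornia versions involved, which is presumably why only newer environments hit the slowdown.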
Thanks a lot for your help. It works.
Great, glad I could help.
Really nice work! Thank you for releasing the code. I wonder how large the network is in total, and how long inference takes?