facebookresearch / sapiens

High-resolution models for human tasks.
https://about.meta.com/realitylabs/codecavatars/sapiens/

Inference time on NVIDIA V100 too long #147

Closed: rabum closed this issue 1 month ago

rabum commented 1 month ago

When I used an NVIDIA 4090 to run inference (not the lite version), the speed was around 0.4 s per frame; when I switched to a V100 it was around 1.3 s per frame. What causes such a big difference, and how can I improve it?
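
For reference, here is a minimal sketch of how per-frame latency is usually measured for a PyTorch model with proper CUDA synchronization. `model` and `frame` are hypothetical placeholders (not Sapiens API names): a loaded pose network and a preprocessed input tensor, both already on the GPU.

```python
import time

import torch

# Placeholders, not Sapiens API names: `model` is the loaded pose network
# on the GPU, `frame` a preprocessed input tensor on the same device.
model.eval()
with torch.inference_mode():
    for _ in range(5):          # warm-up so lazy CUDA init doesn't skew timing
        model(frame)
    torch.cuda.synchronize()    # wait for queued GPU work before starting the clock

    n_iters = 50
    start = time.perf_counter()
    for _ in range(n_iters):
        model(frame)
    torch.cuda.synchronize()    # CUDA calls are async; sync before reading the clock
    elapsed = time.perf_counter() - start
    print(f"{elapsed / n_iters * 1000:.1f} ms per frame (forward pass only)")
```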

chopin1998 commented 1 month ago

The V100 is older than even a 2080 Ti, and you're comparing it to a 4090?

rabum commented 1 month ago

> The V100 is older than even a 2080 Ti, and you're comparing it to a 4090?

The speed gap is not that big when I use the YOLO model; it's specifically large for the pose model. I wonder what causes it.
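
For the forward pass itself, one thing worth trying on the V100 (an assumption on my part, not something the Sapiens repo documents) is FP16 autocast: Volta predates TF32, so FP32 matmuls run much slower there than on an Ada-generation 4090, while FP16 can use the V100's tensor cores. `model` and `frame` are the same placeholders as above.

```python
import torch

# Assumption, not an official Sapiens recommendation: `model` and `frame`
# are placeholders. The V100 (Volta) has no TF32 support, so FP32 matmuls
# are far slower than on a 4090; FP16 autocast routes the heavy layers
# through the V100's tensor cores instead.
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.float16):
    output = model(frame)
```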

rawalkhirodkar commented 1 month ago

@rabum the native pose model inference is slow due to post-processing. The visualization and saving to disk are quite comprehensive and take a while.
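
One way to confirm this split on your own setup, as a rough sketch rather than the repo's actual pipeline code (`model`, `frame`, and `postprocess` are stand-ins), is to time the GPU forward pass and the CPU-side post-processing separately:

```python
import time

import torch

# Stand-ins, not the actual Sapiens pipeline: `model` (pose network on the
# GPU), `frame` (preprocessed input tensor), and `postprocess` (keypoint
# decoding, visualization, and writing results to disk).
with torch.inference_mode():
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    heatmaps = model(frame)       # GPU forward pass
    torch.cuda.synchronize()      # ensure the forward pass has actually finished
    t1 = time.perf_counter()
    postprocess(heatmaps)         # CPU-side post-processing + save to disk
    t2 = time.perf_counter()

print(f"forward: {(t1 - t0) * 1e3:.1f} ms | post-processing: {(t2 - t1) * 1e3:.1f} ms")
```

If the second number dominates on both GPUs, the 0.4 s vs 1.3 s gap is mostly about post-processing and I/O rather than raw compute.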