Daniil-Osokin / lightweight-human-pose-estimation-3d-demo.pytorch

Real-time 3D multi-person pose estimation demo in PyTorch. OpenVINO backend can be used for fast inference on CPU.
Apache License 2.0

It takes much more time to parse poses than to run model inference. #25

Closed: TsingWei closed this issue 4 years ago

TsingWei commented 4 years ago

My device: NVIDIA Jetson TX2. On it, the parse_poses function takes about 130 ms, while network inference takes about 90 ms. Maybe this code was written for a powerful x86 CPU rather than ARM? The parse_poses part looks like a bunch of matrix computation done in numpy.
Any ideas on how to improve it?
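For anyone who wants to double-check such numbers, a small timing helper like the sketch below can compare the forward pass against parse_poses on the same frames. The names in the usage comments (net, scaled_img, input_scale, stride, fx) are assumptions for illustration, not the repository's exact API.

```python
import time

def profile_stage(fn, *args, repeats=10):
    """Time a callable over several runs and return (last result, mean milliseconds)."""
    fn(*args)  # warm-up run; the first call may include lazy initialization
    start = time.perf_counter()
    for _ in range(repeats):
        out = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000 / repeats
    return out, elapsed_ms

# Usage sketch (hypothetical names, adjust to the actual demo code):
#   results, infer_ms = profile_stage(net.infer, scaled_img)
#   _, parse_ms = profile_stage(parse_poses, results, input_scale, stride, fx, True)
#   print(f'inference: {infer_ms:.1f} ms, parse_poses: {parse_ms:.1f} ms')
```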

TsingWei commented 4 years ago

And, is it the same model as here?

Daniil-Osokin commented 4 years ago

Hi, we did not release the training code due to time constraints. I think the easiest way is to take the neighbor repository and add a 3D keypoints estimation branch to it.

TsingWei commented 4 years ago

Oh, after I built the pose extractor, it works.
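For readers hitting the same slowdown: the pose extractor mentioned here is the repository's native (C++) post-processing module, built separately as described in the repository's README, which replaces the numpy grouping code. A minimal sketch of how a script might check whether the built module is importable follows; the module and function names are assumptions based on the repository layout, not verified here.

```python
# Prefer the compiled C++ pose extractor when it is available,
# otherwise fall back to the pure-numpy post-processing path.
try:
    from pose_extractor import extract_poses  # assumed name of the built C++ extension
    print('Using the native (C++) pose extractor')
except ImportError:
    extract_poses = None
    print('Native pose extractor not found, falling back to numpy post-processing')
```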

Daniil-Osokin commented 4 years ago

Great!