DenisTome / Lifting-from-the-Deep-release

Implementation of "Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image"
https://denistome.github.io/papers/lifting-from-the-deep
GNU General Public License v3.0

Inference speed - too slow? #13

Closed atradev closed 6 years ago

atradev commented 6 years ago

Hi, I was trying to run the demo on a MacBook Pro 2016 base model, and `inference_pose` takes about 86 seconds per image. I was wondering if this is to be expected?

P.S. Going through the network defined in the CPM.py file, it looks like there are a number of conv_2d layers not followed by pooling, so I imagine the resulting computations are quite heavy.

P.S.2: I re-ran it using an optimised build of TensorFlow, and `inference_pose` ran in ~30 seconds.
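For anyone reproducing these measurements, a minimal timing harness like the one below can be wrapped around the demo's inference call. The `run_inference` stub here is a placeholder (it only sleeps) so the sketch runs standalone; in practice you would substitute the project's actual `inference_pose` call.

```python
import time

def run_inference(image):
    # Placeholder for the real inference_pose call from the demo;
    # stubbed with a short sleep so this harness runs on its own.
    time.sleep(0.01)
    return None

start = time.perf_counter()
run_inference(None)
elapsed = time.perf_counter() - start
print(f"inference took {elapsed:.2f} s")
```

`time.perf_counter()` is preferable to `time.time()` for benchmarking, since it uses the highest-resolution monotonic clock available.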

DenisTome commented 6 years ago

I agree, the inference time is large. For the purposes of the demo we took the convolutional pose machine (CPM) model provided by its authors and converted it to TensorFlow. The scarcity of pooling layers is a property of the CPM architecture we used. If you have something faster, you can use your own 2D pose estimator and feed the 2D key-points into our 3D lifter. The bottleneck is definitely the initial 2D pose estimation.
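The suggested decoupling, swapping in a faster 2D detector and passing its key-points to the lifting stage, might be sketched as below. Both function names are hypothetical stand-ins (the project exposes these stages through its own modules); the lifter here merely appends a zero depth so the sketch is runnable. Only the data shapes, 14 CPM-style joints as (x, y) pixel coordinates, reflect the intended interface.

```python
import numpy as np

def fast_2d_estimator(image):
    # Hypothetical: stands in for any faster 2D pose detector.
    # Returns (14, 2) pixel coordinates for CPM-style joints.
    return np.zeros((14, 2), dtype=np.float32)

def lift_to_3d(keypoints_2d):
    # Hypothetical: placeholder for the probabilistic 3D lifting stage.
    # Here it just appends a zero depth column to keep the sketch runnable.
    depth = np.zeros((keypoints_2d.shape[0], 1), dtype=np.float32)
    return np.hstack([keypoints_2d, depth])

image = np.zeros((368, 368, 3), dtype=np.uint8)  # CPM-sized input frame
pose_3d = lift_to_3d(fast_2d_estimator(image))
print(pose_3d.shape)  # (14, 3)
```

The point of the design is that the lifter only consumes 2D key-points, so any detector producing compatible joint coordinates can replace the CPM front end.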

GajjarMihir commented 6 years ago

@atradev

Can you please share the code that runs in ~30 seconds? I am trying to optimize this code. Your help would be greatly appreciated.

Thanking You, Mihir Gajjar