mit-han-lab / litepose

[CVPR'22] Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation
https://hanlab.mit.edu
MIT License

Jetson Nano inference speed is not the same #20

Open Kangjik94 opened 2 years ago

Kangjik94 commented 2 years ago

Hello, I tested your COCO and CrowdPose `.pth.tar` checkpoints using litepose/valid.py.

In my tests, the COCO-trained LitePose-Auto-S ran at about 2 FPS.

Is there a way to speed up inference on the Jetson Nano?

Or did I miss something (like converting the torch models to TVM)?

When I tested litepose/nano_demo/start.py with the provided weights, the FPS was almost 7.
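
For reference, here is how I checked raw forward-pass throughput, separate from the end-to-end numbers valid.py reports (which include post-processing). This is just an illustrative sketch, not code from the repo; the `measure_fps` helper and the 448×448 input shape are my own assumptions.

```python
import time
import torch

def measure_fps(model: torch.nn.Module,
                input_shape=(1, 3, 448, 448),  # assumed input resolution
                n_warmup=10, n_iters=50):
    """Rough forward-pass FPS; excludes data loading and post-processing."""
    model.eval()
    device = next(model.parameters()).device
    x = torch.randn(input_shape, device=device)
    with torch.no_grad():
        for _ in range(n_warmup):        # warm up caches / CUDA context
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()     # flush queued GPU work before timing
        start = time.perf_counter()
        for _ in range(n_iters):
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return n_iters / elapsed
```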

Kangjik94 commented 2 years ago

If I need to convert the torch model to TVM (or TensorRT), could you give me some advice?

lmxyy commented 1 year ago

We have released the code for running our model on the Jetson Nano with a pre-built TVM binary in nano_demo. To convert the torch model to a TVM binary, you may want to check the TVM Auto Scheduler Tutorial.
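
A minimal sketch of that flow (trace the torch model, import it into Relay, tune with the auto-scheduler, export a library) is below. This is not the exact recipe used to build the nano_demo binary: the `convert_to_tvm` helper, the 448×448 input shape, the target string, and the trial count are all assumptions. It also assumes you run it on the Nano itself, so `LocalRunner` measures on-device; tuning from a host machine would use `auto_scheduler.RPCRunner` instead, and a GPU build would use a CUDA target.

```python
import torch
import tvm
from tvm import relay, auto_scheduler

def convert_to_tvm(model: torch.nn.Module,
                   input_shape=(1, 3, 448, 448),   # assumed input resolution
                   log_file="litepose_tune.json",
                   out_path="litepose_deploy.tar"):
    # Trace the PyTorch model and import it into Relay.
    model = model.eval()
    dummy = torch.randn(input_shape)
    scripted = torch.jit.trace(model, dummy)
    mod, params = relay.frontend.from_pytorch(scripted, [("input", input_shape)])

    # Target the Nano's ARM CPU here for simplicity; a CUDA target
    # would be used to run on its GPU instead.
    target = tvm.target.Target("llvm -mtriple=aarch64-linux-gnu -mattr=+neon")

    # Extract tuning tasks and run the auto-scheduler. The trial count is
    # illustrative; more trials usually yield faster kernels but tune longer.
    tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
    tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
    tuner.tune(auto_scheduler.TuningOptions(
        num_measure_trials=2000,
        runner=auto_scheduler.LocalRunner(repeat=3),
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    ))

    # Compile with the tuned schedules and export a deployable library.
    with auto_scheduler.ApplyHistoryBest(log_file):
        with tvm.transform.PassContext(
                opt_level=3,
                config={"relay.backend.use_auto_scheduler": True}):
            lib = relay.build(mod, target=target, params=params)
    lib.export_library(out_path)
    return out_path
```

The exported library can then be loaded with `tvm.runtime.load_module` and driven through a `GraphModule`, which is roughly what nano_demo does with its pre-built binary.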