mit-han-lab / litepose

[CVPR'22] Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation
https://hanlab.mit.edu
MIT License

Running Video Capture Demo on Windows PC #13

Open Greendogo opened 2 years ago

Greendogo commented 2 years ago

Hey there, could you provide some instruction on setting up and running inference using a webcam on a Windows PC?

I'm getting stuck at the 'tvm' part.

sushil-bharati commented 2 years ago

Yes, this would be very helpful for testing the algorithm on various devices running Windows. It would also help if you could give more information on how to enable/disable GPU device(s). Thanks

kevkid commented 2 years ago

Same here. When using the Jetson demo, it reports ModuleNotFoundError: No module named 'tvm'

lmxyy commented 2 years ago

The nano_demo is tested on a Jetson Nano with TVM support. If you are using a Jetson Nano, you can follow this guide to install TVM. If you are using another device: @MemorySlices, could you adapt the TVM demo into a plain PyTorch one for a more general demo?
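Since the original question is about a webcam on Windows, a plain PyTorch webcam loop might look like the sketch below. This is a minimal sketch, not the repo's demo code: the `run_webcam` function, the input size of 448, and the assumption that the model returns heatmaps directly are all illustrative; keypoint decoding and drawing are left as placeholders.

```python
import numpy as np
import torch
import torch.nn.functional as F

def preprocess(frame: np.ndarray, size: int = 448) -> torch.Tensor:
    """Convert an HxWx3 uint8 frame to a 1x3xsizexsize float tensor."""
    t = torch.from_numpy(frame).float() / 255.0   # H, W, C in [0, 1]
    t = t.permute(2, 0, 1).unsqueeze(0)           # 1, C, H, W
    return F.interpolate(t, size=(size, size), mode="bilinear",
                         align_corners=False)

def run_webcam(model: torch.nn.Module, cam_index: int = 0) -> None:
    import cv2  # only needed for capture and display
    model.eval()
    cap = cv2.VideoCapture(cam_index)  # default webcam on Windows
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            with torch.no_grad():
                output = model(preprocess(frame))  # model-specific output
            # ...decode `output` into keypoints and draw them on `frame`...
            cv2.imshow("litepose", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```

`cv2.VideoCapture(0)` picks the default camera on Windows; pressing `q` exits the loop.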

sushil-bharati commented 2 years ago

@lmxyy Do you know if the models are CPU-friendly? Do we "require" a GPU to run them optimally? I tried them in my CPU-only environment and it takes ~1.96 s to process a frame (448x448x3). Am I doing something wrong?

lmxyy commented 2 years ago

The model should be CPU-friendly, as we also report results on Raspberry Pi, where it only takes ~100 ms. But if you run the PyTorch model directly on CPU, your result is reasonable, since the PyTorch CPU backend is not well optimized.
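To reproduce a per-frame latency number like the ~1.96 s quoted above, a simple timing harness can help; this is a generic sketch (the function name and warmup/iteration counts are arbitrary), not part of the repo:

```python
import time
import torch
import torch.nn as nn

def time_inference(model: nn.Module, size: int = 448,
                   warmup: int = 3, iters: int = 10) -> float:
    """Average per-frame latency in seconds for a 1x3xsizexsize input."""
    model.eval()
    x = torch.randn(1, 3, size, size)
    with torch.no_grad():
        for _ in range(warmup):           # let threads/caches settle
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
    return (time.perf_counter() - start) / iters
```

Warmup iterations matter on CPU, since the first few forward passes often pay one-time allocation and thread-spawning costs.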

sushil-bharati commented 2 years ago

Thank you, @lmxyy, for the prompt response. That explains why I am getting such a slow speed. I am indeed running the model(s) with PyTorch's CPU backend. So, is there a way to run the optimized model(s) in a CPU-only environment, or is that out of scope?

lmxyy commented 2 years ago

You could try TVM to optimize the CPU backend. But I think this will cost you much more time...
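For reference, compiling a traced PyTorch model with TVM for a CPU target generally follows the Relay flow below. This is a hedged sketch under the assumption of TVM ≥ 0.8 being installed; the function name `compile_for_cpu` and the input name "input" are illustrative, and auto-tuning (which is where most of the speedup comes from) is omitted:

```python
import torch

def compile_for_cpu(model: torch.nn.Module, size: int = 448):
    """Trace a PyTorch model and compile it with TVM for the local CPU."""
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    model.eval()
    example = torch.randn(1, 3, size, size)
    scripted = torch.jit.trace(model, example)

    # Import the traced graph into Relay and build for the llvm (CPU) target.
    mod, params = relay.frontend.from_pytorch(
        scripted, [("input", tuple(example.shape))])
    lib = relay.build(mod, target="llvm", params=params)

    return graph_executor.GraphModule(lib["default"](tvm.cpu()))

# usage (sketch):
# rt = compile_for_cpu(model)
# rt.set_input("input", frame_tensor.numpy())
# rt.run()
# heatmaps = rt.get_output(0).numpy()
```

Further tuning with TVM's auto-scheduler is what "costs much more time", but it is also what closes most of the gap to the Raspberry Pi numbers.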

kevkid commented 2 years ago

Hi @sushil-bharati, would it be possible to share how you got it to run using the PyTorch CPU backend? I tried doing model(img) and got:

conv2d() received an invalid combination of arguments - got (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int), but expected one of:
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int)
 * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
      didn't match because some of the arguments have invalid types: (numpy.ndarray, Parameter, NoneType, tuple, tuple, tuple, int)

Thank you
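The error above means a numpy.ndarray was passed where conv2d expects a torch.Tensor. A minimal fix, assuming `img` is an HxWx3 uint8 image (e.g. from cv2): convert it to a float tensor and reorder it to NCHW before calling the model. The helper name `to_model_input` is illustrative:

```python
import numpy as np
import torch

def to_model_input(frame: np.ndarray) -> torch.Tensor:
    """HxWx3 uint8 numpy image -> 1x3xHxW float tensor in [0, 1]."""
    t = torch.from_numpy(frame).float() / 255.0   # H, W, C
    return t.permute(2, 0, 1).unsqueeze(0)        # 1, C, H, W

# usage (hypothetical model):
# with torch.no_grad():
#     out = model(to_model_input(img))
```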

731076467 commented 2 years ago

Hello, I'd like to ask why I can't find the scheduler imported by the line "from scheduler import warmup designer" in the dist_train file. What's the reason?

MemorySlices commented 2 years ago

Hi, please ignore it and delete the corresponding import.

Best, Yihan
