mit-han-lab / temporal-shift-module

[ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding
https://arxiv.org/abs/1811.08383
MIT License

How to speed up the online model on Jetson Nano (only 0.7 vid/s)? #166

Open Amazingren opened 3 years ago

Amazingren commented 3 years ago

Hi Lin,

Thanks for your impressive work. However, when I try your online model on Jetson Nano, it runs at only 0.7 vid/s, which is extremely slow, even though I followed your guidance. My environment is as follows:

Is there anything I can do to speed up recognition?

I am looking forward to your reply.

Best wishes, Bin

chongyangwang-song commented 3 years ago

@Amazingren I tried it on Jetson TX2 and it shows 1.6 vid/s. I think we have the same problem; if you have fixed yours, please tell me. Many thanks.

chongyangwang-song commented 3 years ago

@Amazingren I skipped ONNX and TVM and used torch for inference directly; it shows 20 vid/s.
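For reference, below is a minimal sketch (not the authors' code) of what skipping ONNX/TVM and running the demo model directly in PyTorch can look like. It assumes the MobileNetV2 TSM definition and Jester checkpoint that ship with online_demo/ (MobileNetV2 from mobilenet_v2_tsm.py, mobilenetv2_jester_online.pth.tar, 27 classes, a forward() that takes the current frame plus the shift buffers and returns logits plus updated buffers) and shift-buffer shapes matching the TVM buffer setup in online_demo/main.py; verify all of these against your copy of the repo before relying on them.

```python
# Sketch: per-frame inference in plain PyTorch, skipping the ONNX export and
# TVM compilation used by online_demo/main.py. The class name, constructor
# argument, forward signature, checkpoint name, and buffer shapes below are
# assumptions taken from the demo and should be checked against the repo.
import torch
from mobilenet_v2_tsm import MobileNetV2  # online_demo/mobilenet_v2_tsm.py

def load_model(checkpoint="mobilenetv2_jester_online.pth.tar"):
    model = MobileNetV2(n_class=27)          # 27 Jester gesture classes (assumed)
    model.load_state_dict(torch.load(checkpoint, map_location="cpu"))
    model.eval()
    return model

def make_buffers():
    # Zero-initialized shift buffers carried from frame to frame.
    shapes = [(1, 3, 56, 56), (1, 4, 28, 28), (1, 4, 28, 28),
              (1, 8, 14, 14), (1, 8, 14, 14), (1, 8, 14, 14),
              (1, 12, 14, 14), (1, 12, 14, 14),
              (1, 20, 7, 7), (1, 20, 7, 7)]
    return [torch.zeros(s) for s in shapes]

@torch.no_grad()
def predict(model, frame, buffers):
    # frame: (1, 3, 224, 224) tensor produced by the demo's transform pipeline.
    outputs = model(frame, *buffers)
    logits, new_buffers = outputs[0], list(outputs[1:])
    return logits.argmax(dim=1).item(), new_buffers

if __name__ == "__main__":
    net = load_model()
    buf = make_buffers()
    dummy = torch.randn(1, 3, 224, 224)      # stand-in for a preprocessed camera frame
    for _ in range(5):
        cls, buf = predict(net, dummy, buf)
        print("predicted class:", cls)
```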

Amazingren commented 3 years ago

@Amazingren I skipped ONNX and TVM and used torch for inference directly; it shows 20 vid/s.

Cool! Thanks for your advice, I will try this way later!

hoangminhtoan commented 3 years ago

@Amazingren I skipped ONNX and TVM and used torch for inference directly; it shows 20 vid/s.

Hi, could you provide example code without tvm?

xiaoxingf commented 3 years ago

@Amazingren I skipped ONNX and TVM and used torch for inference directly; it shows 20 vid/s.

Hi, could you provide example code without tvm?

I have the code without tvm, do you need it now?

MyungHwanSung commented 3 years ago

@Amazingren I skipped ONNX and TVM and used torch for inference directly; it shows 20 vid/s.

Hi, could you provide example code without tvm?

I have the code without tvm, do you need it now?

Can I get the code? I would be very grateful if you could do that.

xiaoxingf commented 3 years ago

The code format seems to have changed when I pasted it on GitHub; please check the differences between the code and the screenshot above. Also, my code runs on an Ubuntu PC, so there may be small differences on an NVIDIA board.

My environment is python=3.6.13, pytorch=1.4.0, numpy=1.19.2.
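Regarding the TypeError quoted below (GroupNormalize raising "object() takes no parameters"), the usual cause is that the class body, including its __init__, was lost or mis-indented when the code was pasted, so Python falls back to the bare object() constructor. As a reference point, a minimal GroupNormalize in the spirit of the TSM/TSN transform looks roughly like the sketch below; check it against the transform that actually ships with the repo before using it.

```python
# Sketch of a GroupNormalize transform along the lines of the one TSM uses.
# If a pasted copy loses this class body, constructing GroupNormalize(mean, std)
# hits object.__init__ and raises "TypeError: object() takes no parameters".
import torch

class GroupNormalize(object):
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def __call__(self, tensor):
        # The tensor may stack several frames along the channel axis, so the
        # per-channel mean/std are repeated to cover every frame.
        rep_mean = self.mean * (tensor.size(0) // len(self.mean))
        rep_std = self.std * (tensor.size(0) // len(self.std))
        for t, m, s in zip(tensor, rep_mean, rep_std):
            t.sub_(m).div_(s)
        return tensor

# Example: normalize one preprocessed frame in place.
x = torch.rand(3, 224, 224)
norm = GroupNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
x = norm(x)
```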

At 2021-05-21 16:23:08, MyungHwanSung wrote:

I really appreciate your help, but an error still occurs:

python3 online_demo/main.py
Open camera...
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
<VideoCapture 0x7f171a1110>
Build transformer...
/home/ice/.local/lib/python3.6/site-packages/torchvision-0.7.0-py3.6-linux-aarch64.egg/torchvision/transforms/transforms.py:257: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  "please use transforms.Resize instead.")
Traceback (most recent call last):
  File "online_demo/main.py", line 290, in <module>
    main()
  File "online_demo/main.py", line 201, in main
    transform = get_transform()
  File "online_demo/main.py", line 113, in get_transform
    GroupNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
TypeError: object() takes no parameters

Thanks for the help.


xiaoxingf commented 3 years ago

[six screenshots attached]

MyungHwanSung commented 3 years ago

I solved the problem thanks to you. Thanks! But it only runs at 0.2 vid/s. T.T

waduhekx commented 3 years ago

@Amazingren I skipped ONNX and TVM and used torch for inference directly; it shows 20 vid/s.

Can you share your demo code that doesn't use tvm and onnx with me? Thank you.