Amazingren opened this issue 3 years ago
@Amazingren I tried it on a Jetson TX2 and it shows 1.6 vid/s. I think we have the same problem; if you have fixed yours, please tell me. Many thanks.
@Amazingren I skipped ONNX and TVM and used torch for inference directly; it shows 20 vid/s.
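Roughly, the idea is to build the PyTorch model and carry the shift buffers yourself instead of going through the ONNX/TVM executor. This is just a sketch of the shape of it, not my exact code: it assumes the MobileNetV2 TSM model from online_demo/mobilenet_v2_tsm.py and its buffer-passing forward, and BUFFER_SHAPES / get_next_frame are placeholders for what main.py already sets up.

# Sketch: run the online TSM demo with plain PyTorch instead of the ONNX/TVM executor.
# Assumes the MobileNetV2 TSM model from online_demo/mobilenet_v2_tsm.py, whose forward
# takes (frame, *shift_buffers) and returns (logits, *updated_buffers).
import torch
from mobilenet_v2_tsm import MobileNetV2

model = MobileNetV2(n_class=27)  # 27 Jester gesture classes
model.load_state_dict(torch.load("mobilenetv2_jester_online.pth.tar"))  # checkpoint name assumed; use whatever main.py downloads
model.eval()

# Reuse the same zero-initialized shift-buffer shapes that main.py builds for the TVM executor.
buffers = [torch.zeros(shape) for shape in BUFFER_SHAPES]  # BUFFER_SHAPES: placeholder, copy from main.py

with torch.no_grad():
    while True:
        frame = get_next_frame()  # placeholder: a preprocessed [1, 3, 224, 224] tensor from the camera
        outputs = model(frame, *buffers)
        logits, buffers = outputs[0], list(outputs[1:])
        prediction = int(logits.argmax(dim=1))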
Cool! Thanks for your advice, I will try this way later!
Hi, could you provide example code without tvm?
I have the code without TVM; do you need it now?
Can I get the code? I would be very grateful if you could do that.
The code format seems to have changed when I pasted it on GitHub, so please check for differences between the code and the screenshot above. Also, my code runs on an Ubuntu PC, so there may be small differences on an NVIDIA board.
My code's operating environment is python=3.6.13, pytorch=1.4.0, numpy=1.19.2.
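If it helps, a quick sanity check to confirm your versions match (just a throwaway snippet, not part of the shared code):

# Print the interpreter and package versions to compare against the ones above.
import sys
import torch
import numpy

print(sys.version)        # expecting 3.6.13
print(torch.__version__)  # expecting 1.4.0
print(numpy.__version__)  # expecting 1.19.2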
At 2021-05-21 16:23:08, "MyungHwanSung" wrote:
I really appreciate it, but some errors still occur:
python3 online_demo/main.py
Open camera...
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
<VideoCapture 0x7f171a1110>
Build transformer...
/home/ice/.local/lib/python3.6/site-packages/torchvision-0.7.0-py3.6-linux-aarch64.egg/torchvision/transforms/transforms.py:257: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  "please use transforms.Resize instead.")
Traceback (most recent call last):
  File "online_demo/main.py", line 290, in <module>
    main()
  File "online_demo/main.py", line 201, in main
    transform = get_transform()
  File "online_demo/main.py", line 113, in get_transform
    GroupNormalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
TypeError: object() takes no parameters
Thanks for the help.
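About the TypeError at the end of that log: "object() takes no parameters" means the GroupNormalize class being instantiated has no __init__ that accepts mean and std. A minimal GroupNormalize along the lines of the group transforms in this repo, just a sketch of the expected shape and not necessarily the exact fix, would be:

# Minimal GroupNormalize sketch: stores per-channel mean/std and normalizes a
# stacked (T*C, H, W) tensor in place, repeating mean/std across the frames.
class GroupNormalize(object):
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def __call__(self, tensor):
        rep_mean = self.mean * (tensor.size(0) // len(self.mean))
        rep_std = self.std * (tensor.size(0) // len(self.std))
        for t, m, s in zip(tensor, rep_mean, rep_std):
            t.sub_(m).div_(s)
        return tensor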
I solved the problem thanks to you. Thanks!!! But it only shows 0.2 vid/s T.T
Can you share your demo code which doesn't use TVM and ONNX with me? Thank you.
Hi Lin,
Thanks for your impressive work. However, when I try your online model on a Jetson Nano, it shows only 0.7 vid/s, which is extremely slow, even though I have followed your guidance. My environment is as follows:
Is there anything I can do to speed up recognition?
I am looking forward to your reply.
Best wishes, Bin