Hi,
It looks like that version of DeepStream was compiled for Python 3.6, so you should either use that version or symlink the needed binaries.
Most of the functionality of aws-streamer should work with Python 3.6, so please give it a try. Python 3.7 is only needed to run multiple parallel pipelines (using schedule()); you don't need that for the DeepStream example.
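For illustration, a minimal sketch of what "symlink the needed binaries" could look like for the pyds bindings; both paths are assumptions for a typical DeepStream install on Jetson, so adjust them to your setup:

```python
# Sketch only: link the DeepStream Python bindings (built for Python 3.6) into
# the example folder. Both paths are assumptions; the equivalent shell command
# would be: ln -s <src> <dst>
import os

src = "/opt/nvidia/deepstream/deepstream/lib/pyds.so"                   # assumed install path
dst = os.path.expanduser("~/aws-streamer/examples/deepstream/pyds.so")  # assumed example dir

if not os.path.exists(dst):
    os.symlink(src, dst)
```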
Let me know if this helped!
Hi, thanks for the prompt reply.
That makes sense, though I am also looking to connect everything via the Greengrass example, which requires Python 3.7 as well?
Thanks!
I wouldn't worry about Greengrass. You're right that it says it requires Python 3.7, but it can easily be fooled with a symbolic link to a lower Python version. I've done that on a Jetson Nano, which has Python 3.6 by default. And if that doesn't work, you can always use a Docker image with Greengrass.
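In case it helps, a sketch of that symlink trick; the paths are assumptions for a stock Jetson image and this needs root (equivalent to `sudo ln -s /usr/bin/python3.6 /usr/bin/python3.7`):

```python
# Sketch: make a "python3.7" name resolve to the system Python 3.6 so that
# Greengrass's version check passes. Paths are assumptions for a stock Jetson image.
import os

if not os.path.exists("/usr/bin/python3.7"):
    os.symlink("/usr/bin/python3.6", "/usr/bin/python3.7")
```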
Try first to get your DeepStream app working with Python 3.6, one problem at a time :)
Ok, thanks! Confirming that I am able to run the DeepStream pipeline with the sample video (and stream it to AWS using kvssink) on Python 3.6, as per the README example code. The callback function is able to print the inference results. Working on streaming the inference results to AWS now.
Do you know the config to access the Jetson MIPI camera with DeepStream, please? I have previously been able to stream video to kvssink using nvarguscamerasrc, but I am unable to process it with DeepStream. I have also played around with various configs in ds_pipeline.py but haven't been able to make it work.
Thanks!
Great that you have it working with Python 3.6!
nvarguscamerasrc outputs into NVMM memory, so you need to use the right capsfilter. You'd need to modify the ds_pipeline build step with your custom pipeline and configure it using the following or similar caps:
from gi.repository import Gst  # assumes gi.require_version("Gst", "1.0") was called earlier
# Caps matching nvarguscamerasrc output in NVMM memory
caps_str = "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1"
caps = Gst.caps_from_string(caps_str)
graph["your_filter"].set_property("caps", caps)
For the pipeline itself, I would just follow these examples for your platform: https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/accelerated_gstreamer.html
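As a quick sanity check (separate from the aws-streamer graph), something along these lines should confirm the MIPI camera produces NVMM buffers. The element names are standard L4T plugins, while the resolution, framerate and fakesink are placeholder assumptions:

```python
# Minimal standalone test of nvarguscamerasrc with NVMM caps (not the aws-streamer
# pipeline). Swap fakesink for a display sink (e.g. nvegltransform ! nveglglessink)
# if you want to see the video.
import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, "
    "format=(string)NV12, framerate=(fraction)30/1 ! "
    "fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
time.sleep(5)                        # let it run briefly
pipeline.set_state(Gst.State.NULL)   # then shut down cleanly
```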
Just confirming that DeepStream and aws-streamer are working with Python 3.6. You can close the issue if you would like. I'm still unable to access the Jetson camera, but I will experiment some more and raise another issue if needed.
Thanks!
Hi Bartek, thanks for making this library.
I'm trying to run the DeepStream example using a Jetson Xavier (JetPack 4.4.1, DeepStream 5.1). When I try to run inference_numeric.py I get the following error:
I have tried installing DeepStream using the tar files from the DeepStream pages and copied pyds.so from the DeepStream libraries into the folder with inference_numeric.py. I am using Python 3.7, as required by aws-streamer, but it seems pyds.so isn't compatible?
Please could you advise. Thanks!