-
Hi,
I'm trying to run the following command, _**sudo ./deepstream-test5-analytics -c config/test5_config_file_src_infer_tlt.txt**_, under this path: _**/opt/nvidia/deepstream/deepstream-5.0/sources/apps…
-
Can anyone help me write my preprocess? I didn't get any bounding boxes after running the DeepStream app. Thank you so much.
![image](https://user-images.githubusercontent.com/70887055/118452992-eabca480-…
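For reference, this is roughly the preprocessing I would expect to need. It is only a sketch under assumptions: I am guessing a DetectNet_v2-style TLT model that takes RGB input scaled to [0, 1] (the equivalent of `net-scale-factor=0.0039215686` with zero offsets in the nvinfer config); the input size and file name below are placeholders.

```python
import cv2
import numpy as np

# Assumed model input size and scaling; adjust to match the nvinfer config
# (net-scale-factor, offsets, model-color-format) and the exported TLT model.
INPUT_W, INPUT_H = 960, 544
NET_SCALE_FACTOR = 1.0 / 255.0  # equivalent of net-scale-factor=0.0039215686
OFFSETS = np.array([0.0, 0.0, 0.0], dtype=np.float32)

def preprocess(image_path: str) -> np.ndarray:
    """Read an image and produce the NCHW float32 tensor the detector expects."""
    bgr = cv2.imread(image_path)                # HWC, BGR, uint8
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # model-color-format=0 (RGB)
    resized = cv2.resize(rgb, (INPUT_W, INPUT_H))
    scaled = (resized.astype(np.float32) - OFFSETS) * NET_SCALE_FACTOR
    chw = np.transpose(scaled, (2, 0, 1))       # HWC -> CHW
    return np.expand_dims(chw, axis=0)          # add batch dim -> NCHW

if __name__ == "__main__":
    tensor = preprocess("sample.jpg")  # placeholder file name
    print(tensor.shape, tensor.dtype)  # e.g. (1, 3, 544, 960) float32
```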
-
I modified my Makefile according to the answer given here to accommodate DeepStream 5.1:
https://forums.developer.nvidia.com/t/fatal-error-cuda-runtime-api-h-no-such-file-or-directory-when-compili…
-
Hi,
First of all, thanks for the project. This is awesome.
I have a few doubts:
1) When I run this with the dockerized tritonserver-based DeepStream image and follow the steps, it runs great.
But how can…
-
## Bug Description
I cannot convert a TorchScript module because of the error:
```
RuntimeError: [Error thrown at core/partitioning/shape_analysis.cpp:116] Unable to process subgraph input type …
```
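For context, this is roughly how I am invoking the conversion. It is only a sketch under assumptions: I am using the `torch_tensorrt` Python API (older releases expose the same call as `trtorch.compile`), and the model and input shape below are placeholders standing in for my actual TorchScript module.

```python
import torch
import torch_tensorrt  # in older releases: import trtorch

# Placeholder model; the real one is the TorchScript module that fails to convert.
model = torch.jit.script(torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
)).eval().cuda()

# Explicit static input shapes, so the partitioner can resolve the type of
# every subgraph input during shape analysis.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)],
    enabled_precisions={torch.float32},
)

x = torch.randn(1, 3, 224, 224, device="cuda")
print(trt_model(x).shape)
```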
-
I am using a Jetson AGX with JetPack 4.4.1 (R32.4.4) and the nvcr.io/nvidia/l4t-pytorch:r32.4.4-pth1.6-py3 docker image from NGC.
Installing with setup.py from both the 19.10 and 20.03 branches yields issues:
…
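As a sanity check before building anything from source, I verify the PyTorch environment that ships inside the container. A minimal sketch, using only the stock wheel from the l4t-pytorch image:

```python
# Quick check of the container environment before running setup.py builds.
import torch

print("PyTorch:", torch.__version__)          # expect 1.6.0 for r32.4.4-pth1.6-py3
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```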
-
I feel like I'm probably missing a step somewhere, but I am unable to boot using the `tegra-demo-distro` or a custom image built from `meta-tegra`. I am not using Mender in either build. I am buildin…
-
I'm using an ONNX model (converted from PyTorch - mmdetection) on Triton Inference Server in the DeepStream 5.1 SDK. I have been getting the following error:
```
I0317 18:31:58.856597 84 model_repository_…
```
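Before loading the model into the Triton model repository, I try to validate the ONNX export on its own. A minimal sketch only; the file name and input shape are placeholders for my mmdetection export, and it assumes `onnx` and `onnxruntime` are installed:

```python
import numpy as np
import onnx
import onnxruntime as ort

MODEL_PATH = "model.onnx"        # placeholder: the mmdetection ONNX export
DUMMY_SHAPE = (1, 3, 800, 1333)  # placeholder: match the export's input shape

# Structural check of the exported graph.
model = onnx.load(MODEL_PATH)
onnx.checker.check_model(model)
print("Graph inputs:",
      [(i.name, [d.dim_value for d in i.type.tensor_type.shape.dim])
       for i in model.graph.input])

# Run one inference with ONNX Runtime to confirm the model executes outside Triton.
sess = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
outputs = sess.run(None, {input_name: np.random.rand(*DUMMY_SHAPE).astype(np.float32)})
for out, meta in zip(outputs, sess.get_outputs()):
    print(meta.name, out.shape)
```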
-
https://forums.developer.nvidia.com/t/conversion-of-model-weights-for-human-pose-estimation-model-to-onnx-results-in-nonsensical-pose-estimation/164417/13
-
Hi, thank you for your contribution.
Could you advise where to start, given that we are interested in using DeepStream integration with the most recent JetPack 4.5.1 release of the Jetson OS, please?