-
Supporting parallel inference with multiple models in one pipeline
eyu11 updated
7 months ago
-
The export seems to work: training goes fine, and prediction with Ultralytics works fine, but DeepStream does not create boxes for the YOLOv8 model.
There must still be an issue with the export. I use the export_yolov…
-
Hi,
@marcoslucianops Great job!
I followed this seeedstudio wiki : [https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/](https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/)
Works g…
-
Dear,
I have done the same as you explain, but when I run the code I get this error: "pgie could not be created. Exiting.\n"
Could you please provide a Docker image that works with your code?
…
-
## Env
- GPU: Tesla T4
- CUDA version: CUDA compilation tools, release 11.8, V11.8.89 (build cuda_11.8.r11.8/compiler.31833905_0)
- TensorRT version: 8.5.2.2
## About this repo
- commit: c66…
-
When pulling RTSP video streams, once more than 16 sources are added, the new sources cannot be linked to the tee. The error message follows:
![1701400998614](https://github.com/NVIDIA-AI-…
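For context, a common cause of this behavior is a fixed cap on how many request pads the pipeline creates for its sources. The guard below is a pure-Python sketch of that logic only; the `MAX_SOURCE_BINS` name and the value 16 are assumptions for illustration, not confirmed from the DeepStream sources.

```python
# Sketch: fail fast with a clear message instead of a silently broken tee link.
# MAX_SOURCE_BINS and its value (16) are assumptions for illustration.
MAX_SOURCE_BINS = 16

def request_tee_pad(linked_count: int) -> str:
    """Return a pad name for the next source, or raise once the cap is hit."""
    if linked_count >= MAX_SOURCE_BINS:
        raise RuntimeError(
            f"cannot link source {linked_count}: tee is capped at "
            f"{MAX_SOURCE_BINS} request pads"
        )
    return f"src_{linked_count}"
```

Checking the cap (or the return value of the pad request) before linking turns the failure into an actionable error instead of a broken pipeline.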
-
DoD:
- [x] Update config files to test out a camera and the RTSP sink.
- [x] Create a scenario for final application, and define the boundaries for a security system which gives count of people in…
-
Hi,
I wanted to ask whether it would be a good idea to change the location of the generated engine files to be the same as the original onnx files, and also keep the original name but change only t…
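One way to implement that suggestion is to derive the engine path from the ONNX path, keeping the directory and stem and changing only the suffix. This is a sketch; the helper name and the `_fp16` tag are assumptions, not an existing API.

```python
from pathlib import Path

def engine_path_for(onnx_path: str, precision: str = "fp16") -> str:
    # Keep the ONNX file's directory and base name; change only the suffix,
    # appending a precision tag so different builds don't collide.
    p = Path(onnx_path)
    return str(p.with_name(f"{p.stem}_{precision}.engine"))
```

With this scheme the engine always sits next to its source model, e.g. `/models/yolov8s.onnx` maps to `/models/yolov8s_fp16.engine`.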
-
DeepStream: 6.2
DSL commit: 0.27.b.alpha
Pipeline:
```cpp
// Create a list of Pipeline Components to add to the new Pipeline.
const wchar_t* components[] = {L"uri-source-1", L"primary-gie",
…
-
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.4
TensorRT Version: 8.5.2
cuDNN Version: 8.6
CRNN ONNX model: input dim = (1, 1, 32, 160), output dim = (41, 1, 11). I use trtexec to transform the ONNX model to …
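For reference, a trtexec invocation for a fixed-shape model like this can be assembled as below. The paths and the FP16 flag are assumptions; building the command as a list keeps it easy to hand to `subprocess.run`.

```python
# Sketch: assemble a trtexec command for converting the CRNN ONNX model
# above into a TensorRT engine. crnn.onnx / crnn.engine are placeholders.
def build_trtexec_cmd(onnx_path: str, engine_path: str, fp16: bool = True) -> list:
    cmd = [
        "trtexec",
        f"--onnx={onnx_path}",          # input ONNX model
        f"--saveEngine={engine_path}",  # where to write the serialized engine
    ]
    if fp16:
        cmd.append("--fp16")  # enable FP16 precision
    return cmd
```

For example, `build_trtexec_cmd("crnn.onnx", "crnn.engine")` yields the argument list for `trtexec --onnx=crnn.onnx --saveEngine=crnn.engine --fp16`.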