NVIDIA-AI-IOT / deepstream_tao_apps

Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
MIT License

multi-stream RTSP support #6

Open · PythonImageDeveloper opened 4 years ago

PythonImageDeveloper commented 4 years ago

Hello,

1. Does this repo only work with JetPack 4.4, DeepStream 5.0, and TLT 2.0?
2. For DetectNet_v2, is it possible to run with multi-stream RTSP input? How?
3. I want to run other code alongside DeepStream: decode multiple RTSP streams with the Jetson Nano's HW decoder, feed some of the decoded streams to this repo's DeepStream pipeline, and feed the rest to my own Python code for further processing. Is this possible? I want to do this with Docker.
4. In the models folder, only .bin and .etlt files exist. Is that enough to run? And if I want to use my own trained model (one of the six models, but with a different input size and dataset), is it possible to run it with this repo's code for that model?
5. DeepStream accepts both a TensorRT engine file and an .etlt file, but the TensorRT engine file is hardware dependent. Which mode has higher FPS in inference?

morganh-nv commented 4 years ago

1. Yes.
2. It is possible. Refer to https://forums.developer.nvidia.com/t/multi-stream-rtsp-on-jetson-nano/122357 (a multi-source config sketch follows below).
3. Please create a topic in the DeepStream forum: https://forums.developer.nvidia.com/c/accelerated-computing/intelligent-video-analytics/deepstream-sdk/15
4. For INT8, you need cal.bin, the .etlt model, and your API key; for FP16/FP32, only the .etlt model and your API key are needed. And yes, you can run your own .etlt model; you need to specify it in the config files. See https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#intg_model_deepstream (a config sketch follows below).
5. When running with DeepStream, the .etlt model is converted to a TensorRT engine anyway, so the inference performance should be the same.
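
On point 2, multi-stream RTSP input is configured in the deepstream-app config file by adding one `[sourceN]` group per stream. A minimal sketch, assuming two cameras; the URIs, resolution, and batch size below are placeholders to adapt to your setup:

```ini
# One [sourceN] group per RTSP stream; type=4 selects an RTSP source.
[source0]
enable=1
type=4
uri=rtsp://192.168.1.10:554/stream1
num-sources=1

[source1]
enable=1
type=4
uri=rtsp://192.168.1.11:554/stream1
num-sources=1

# The muxer batches all sources; batch-size should equal the number of streams.
[streammux]
live-source=1
batch-size=2
width=1280
height=720
batched-push-timeout=40000
```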
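
On point 3, decoding with the Jetson HW decoder outside DeepStream is plain GStreamer: nvv4l2decoder is the hardware decoder element, so your own Python process (or a quick gst-launch test) can use it independently of this repo. A minimal sketch, assuming an H.264 stream at a placeholder URI (use rtph265depay/h265parse for H.265):

```sh
gst-launch-1.0 rtspsrc location=rtsp://192.168.1.12:554/stream1 ! \
  rtph264depay ! h264parse ! nvv4l2decoder ! \
  nvvidconv ! 'video/x-raw,format=BGRx' ! fakesink
```

Replace fakesink with an appsink (e.g. via gst-python or OpenCV's GStreamer backend) to hand the decoded frames to your own code.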
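
On points 4 and 5, the TAO/TLT-specific keys live in the nvinfer config file. A hedged sketch, assuming a DetectNet_v2 model; every path, the model key, the input dimensions, and the blob names are placeholders that must match your own export:

```ini
[property]
gpu-id=0
# Encoded TAO/TLT model plus the key used when exporting it
tlt-encoded-model=../../models/detectnet_v2/resnet18_detector.etlt
tlt-model-key=<your API key>
# Needed for INT8 only; drop it for FP16/FP32
int8-calib-file=../../models/detectnet_v2/cal.bin
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
# After the first run, the generated TensorRT engine is cached here and reused
model-engine-file=../../models/detectnet_v2/resnet18_detector.etlt_b1_gpu0_int8.engine
# Must match the input size the model was trained/exported with
uff-input-dims=3;544;960;0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
batch-size=1
num-detected-classes=3
```

The first run pays the one-time cost of building the engine from the .etlt file; after that, loading the cached engine and loading a pre-built engine behave identically, which is why the inference FPS is the same in both modes.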