choi119 opened 2 months ago
I'm sorry, I don't know what to do.
Hello @choi119,
Could you provide more information about what is not working? How are you downloading the models? What version are you using? Could you also provide the output of your code snippet and any other relevant logs?
@KrishnanPrash Hello. I sent a PeopleNet inference request to the Triton server from outside the container, but the output values are not correct. When I visualized them, the bounding boxes pointed to the wrong locations. I used the shared test1.py. What should I modify?
Inference works normally through the deepstream-app, both inside and outside the server. But without the deepstream-app, the output is wrong.
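For reference, here is a minimal sketch of what a client request can look like, assuming the published PeopleNet layout (input `input_1` of shape 3x544x960, outputs `output_cov/Sigmoid` and `output_bbox/BiasAdd`, pixel scaling by 1/255) and an assumed served model name of `peoplenet`; verify the real names against your `config.pbtxt` and the server log. Missing or reordered preprocessing steps (resize, RGB order, scaling, CHW layout) are a common cause of misplaced boxes:

```python
# Minimal Triton HTTP client sketch for PeopleNet. Model and tensor names
# are assumptions -- check config.pbtxt and the tritonserver startup log.
import numpy as np
import cv2
import tritonclient.http as httpclient

MODEL_NAME = "peoplenet"     # assumed; use the name printed by tritonserver
MODEL_W, MODEL_H = 960, 544  # PeopleNet network resolution

client = httpclient.InferenceServerClient(url="localhost:8000")

frame = cv2.imread("test.jpg")

# Preprocess: BGR -> RGB, resize to the network resolution, scale to [0, 1],
# HWC -> CHW, then add a batch dimension. Skipping or reordering any of
# these steps typically produces shifted or meaningless boxes.
img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (MODEL_W, MODEL_H)).astype(np.float32) / 255.0
img = np.transpose(img, (2, 0, 1))[None, ...]  # shape (1, 3, 544, 960)

inp = httpclient.InferInput("input_1", list(img.shape), "FP32")
inp.set_data_from_numpy(img)
result = client.infer(MODEL_NAME, [inp])

cov = result.as_numpy("output_cov/Sigmoid")[0]    # coverage grid, per class
bbox = result.as_numpy("output_bbox/BiasAdd")[0]  # raw box offsets, per class
print(cov.shape, bbox.shape)                      # e.g. (3, 34, 60), (12, 34, 60)
```

Note that these outputs are raw grids, not pixel-space boxes; see the decoding sketch at the end of this thread.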
@KrishnanPrash
I used this to start the container:

```
arstest@arstest:~$ cat deepstream.sh
docker run --gpus=all -it --runtime=nvidia --rm --shm-size=2g --memory=16g -p 8000:8000 -p 8001:8001 -p 8002:8002 --name triton --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.4 nvcr.io/nvidia/deepstream:6.4-triton-multiarch
```
```
arstest@arstest:~$ docker images
REPOSITORY                  TAG                    IMAGE ID       CREATED        SIZE
nvcr.io/nvidia/deepstream   6.4-triton-multiarch   a3af5eff6a88   9 months ago   16.2GB
```
To prepare the models, I used this script:

```
/opt/nvidia/deepstream/deepstream-6.4/samples/configs/tao_pretrained_models/prepare_triton_models.sh
```
@KrishnanPrash To start the Triton server, I used this:

```
/opt/tritonserver/bin/tritonserver --model-repository=/opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models/triton
```
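Once the server is up, a quick sanity check with the Triton Python client confirms it is live and shows the exact model names that loaded; inference requests must use those names. The calls below are standard `tritonclient` HTTP API:

```python
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
print("live:", client.is_server_live(), "ready:", client.is_server_ready())

# Lists every model in the repository with its load state; use the exact
# name shown here in inference requests.
for model in client.get_model_repository_index():
    print(model["name"], model.get("state"))
```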
**Description**

Inference requests sent to PeopleNet on the Triton server return incorrect output values: the visualized bounding boxes point to the wrong locations. The same inference works correctly through the deepstream-app.
**Triton Information**

What version of Triton are you using?

Triton 2.37.0

Are you using the Triton container or did you build it yourself?

I am using the Triton build shipped in the `nvcr.io/nvidia/deepstream:6.4-triton-multiarch` container, launched with the `docker run` command shown above.
**To Reproduce**

1. Start the `nvcr.io/nvidia/deepstream:6.4-triton-multiarch` container with the `docker run` command above.
2. Run `/opt/nvidia/deepstream/deepstream-6.4/samples/configs/tao_pretrained_models/prepare_triton_models.sh`.
3. Start Triton with `--model-repository=/opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models/triton`.
4. From outside the container, send a PeopleNet inference request with test1.py and visualize the returned bounding boxes.

The model is PeopleNet, as set up by `prepare_triton_models.sh` from the DeepStream TAO pretrained model samples.
**Expected behavior**
I would like to send an inference request to PeopleNet on the Triton server and visualize the results, but I keep running into problems. Please take a look at this code. I don't know what to do.
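One likely cause of boxes "pointing to a strange place" is drawing the raw `output_bbox/BiasAdd` tensor directly. DetectNet_v2 models such as PeopleNet emit per-grid-cell offsets that must be decoded against the grid-cell centers and rescaled to the original frame; deepstream-app performs this in its built-in postprocessor, which would explain why it works there. Below is a sketch of that decoding, assuming the commonly published DetectNet_v2 constants (stride 16, bbox_norm 35.0) and the usual `(num_classes * 4, grid_h, grid_w)` bbox layout; verify these values against the model's documentation:

```python
import numpy as np

def decode_detectnet_v2(cov, bbox, frame_w, frame_h,
                        model_w=960, model_h=544,
                        stride=16, bbox_norm=35.0, conf_thresh=0.4):
    """Decode raw PeopleNet outputs into boxes in original-frame pixels.

    cov:  (num_classes, grid_h, grid_w) coverage/confidence grid
    bbox: (num_classes * 4, grid_h, grid_w) raw box offsets
    """
    num_classes, grid_h, grid_w = cov.shape
    # Grid-cell centers in network coordinates, normalized by bbox_norm.
    cx = (np.arange(grid_w) * stride + 0.5) / bbox_norm
    cy = (np.arange(grid_h) * stride + 0.5) / bbox_norm
    sx, sy = frame_w / model_w, frame_h / model_h  # back to frame pixels

    boxes = []
    for c in range(num_classes):
        ys, xs = np.where(cov[c] > conf_thresh)
        for y, x in zip(ys, xs):
            o1, o2, o3, o4 = bbox[4 * c: 4 * c + 4, y, x]
            # Offsets are relative to the cell center and sign-flipped for
            # the top-left corner, per the DetectNet_v2 convention.
            x1 = (o1 - cx[x]) * -bbox_norm
            y1 = (o2 - cy[y]) * -bbox_norm
            x2 = (o3 + cx[x]) * bbox_norm
            y2 = (o4 + cy[y]) * bbox_norm
            boxes.append((c, x1 * sx, y1 * sy, x2 * sx, y2 * sy,
                          float(cov[c, y, x])))
    return boxes
```

The resulting boxes still need clustering (e.g. NMS or DBSCAN, as DeepStream applies internally) to merge overlapping per-cell detections, but even unclustered they should land on the people in the frame.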