triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

Abnormal system memory usage while enabling GPU metrics #7144

Open SkyM31 opened 6 months ago

SkyM31 commented 6 months ago

Description: There is abnormal system memory usage when GPU metrics are enabled.

With GPU metrics enabled (command: tritonserver --model-repository=/models), the Triton server starts successfully only after a long wait, and it uses about 52 GB of system memory (see attached screenshot).

With GPU metrics disabled (command: tritonserver --model-repository=/models --allow-gpu-metrics=false), the Triton server starts immediately and uses only a small amount of system memory (see attached screenshot).
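For anyone who wants to confirm the same behavior on their own machine, one way is to watch the server's memory from a second shell while Triton starts. This is just a rough sketch; the container name triton_test is my own placeholder, not something from the report above:

# one-shot sample of overall container memory from the host
docker stats triton_test --no-stream
# or, inside the container, the tritonserver process RSS (in KB) and uptime
ps -C tritonserver -o rss,etime,cmd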

I think this problem may be related to the GPU driver or CUDA version rather than to the Triton version. There seem to be some problems in the coordination between Triton and the latest GPU drivers and CUDA.

Triton Information Triton version: installed from the Docker image nvcr.io/nvidia/tritonserver:24.03-py3 (24.02 seems to have the same problem; other versions not tested).

My GPU: NVIDIA GeForce RTX 4060 Ti, Driver Version: 550.54.15, CUDA Version: 12.4

To Reproduce
Pull the image: docker pull nvcr.io/nvidia/tritonserver:24.03-py3
Start a container: docker run --gpus all -it --shm-size=256m -p8000:8000 -p8001:8001 -p8002:8002 -v /your/dir/:/models nvcr.io/nvidia/tritonserver:24.03-py3
Inside the container, run tritonserver --model-repository=/models, press Enter, and monitor the system memory usage.
This problem seems to be unrelated to the type of model being served; it occurs at least with the onnxruntime and tensorrt backends.
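As a side note on checking whether GPU metrics are actually being collected, the Prometheus metrics endpoint is exposed on the port 8002 mapped above. A rough check (the nv_gpu prefix matches the GPU metric names I have seen from Triton, but treat the exact names as an assumption):

# from the host, after the server has started
curl -s localhost:8002/metrics | grep nv_gpu
# with --allow-gpu-metrics=false these lines should be absent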

SkyM31 commented 6 months ago

To add: this problem did not occur when using an RTX 3090 with Driver Version 535.x (possibly not exactly that version; the last test with the RTX 3090 was a long time ago). If you execute the 'nvidia-smi' command inside the container, it takes a long time to read the hardware information, or even gets stuck, instead of returning the GPU information immediately (see attached screenshot).
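In case it helps others compare setups, a simple way to quantify the slow nvidia-smi behavior is to time it inside the container; just a sketch, nothing Triton-specific:

# inside the running container
time nvidia-smi
# on an affected setup this reportedly takes a long time or hangs,
# while on an unaffected one it returns almost immediately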

sanaev-ie-maximtech commented 3 months ago

Hello everyone! We are seeing the same problem on an NVIDIA GeForce RTX 2070 (Driver Version: 555.42.06, CUDA Version: 12.5). We tried Triton versions 23.09 and 24.07, and the problem reproduces with both. Launching the Triton server with the parameter --allow-gpu-metrics=false solves the memory problem but limits functionality. With an NVIDIA GeForce RTX 4090 GPU (Driver Version: 555.42.06, CUDA Version: 12.5) the launch is fine. Any news on this issue?