triton-inference-server / fil_backend

FIL backend for the Triton Inference Server

How to limit the number of CPU cores used by fil backend? #385

Open SkyM31 opened 5 months ago

SkyM31 commented 5 months ago

I deployed multiple models on the Triton server (Docker): the BERT models run on the GPU, and the XGBoost models run on the CPU. Now I want to limit the number of CPU cores used by the FIL backend so it doesn't affect other services. What should I do?

  1. Edit the model configuration in config.pbtxt? The rate limiter does not seem to actually limit the number of CPU cores used (see the config sketch after this list).
  2. Use tritonserver --backend-config=fil,xxx:xxx? It seems that only the ONNX and TensorRT backends provide this kind of command-line configuration; I did not find anything similar in the FIL backend documentation.
  3. Limit the CPUs with docker run (see the second sketch below)? If the XGBoost models run under high load, will that affect the overall performance of the Triton server?
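For reference, this is roughly the rate-limiter configuration I experimented with for option 1 (the resource name `cpu_slots` and the counts are just placeholders I made up). As far as I understand, this only limits how many model executions are scheduled concurrently, not how many cores a single execution may use:

```
# config.pbtxt for an XGBoost model served by the FIL backend (sketch)
backend: "fil"
instance_group [
  {
    count: 1
    kind: KIND_CPU
    rate_limiter {
      resources [
        {
          name: "cpu_slots"   # placeholder resource name
          count: 4            # tokens this instance consumes while executing
        }
      ]
    }
  }
]
```

If I read the docs correctly, the server also has to be started with the rate limiter enabled (--rate-limit=execution_count) and the resource pool sized via --rate-limit-resource, but even then it throttles scheduling rather than pinning cores.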
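And this is what I mean by option 3, just standard Docker CPU limits on the whole container (image tag, ports, and paths are placeholders from my setup):

```
# Constrain the whole tritonserver container to 8 cores (cores 0-7)
docker run --gpus=all \
    --cpus="8" --cpuset-cpus="0-7" \
    -p 8000:8000 -p 8001:8001 -p 8002:8002 \
    -v /path/to/model_repository:/models \
    nvcr.io/nvidia/tritonserver:23.10-py3 \
    tritonserver --model-repository=/models
```

My concern is that this caps CPU for the entire container, so under heavy XGBoost load the BERT models would also be starved of CPU for request handling and pre/post-processing.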