triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

test: Add python backend tests for the new histogram metric #7540

Closed · yinggeh closed this 3 months ago

yinggeh commented 3 months ago

What does the PR do?

Adds tests for the new histogram metric to the custom_metrics Python backend test suite.
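As a rough illustration of what a histogram-metric test verifies, the sketch below models Prometheus-style histogram semantics (cumulative `le` buckets plus running `count` and `sum`) in plain Python. This is a hypothetical stand-in, not the actual test: the real tests exercise Triton's Python backend metrics API, which is not reproduced here.

```python
import bisect

class HistogramSketch:
    """Minimal model of a Prometheus-style histogram metric."""

    def __init__(self, buckets):
        # Bucket upper bounds, sorted ascending; an implicit +Inf
        # bucket is appended at the end.
        self.buckets = sorted(buckets)
        self.counts = [0] * (len(self.buckets) + 1)  # last slot is +Inf
        self.sum = 0.0
        self.count = 0

    def observe(self, value):
        # Record in the first bucket whose upper bound satisfies
        # value <= bound (Prometheus "le" semantics), else +Inf.
        idx = bisect.bisect_left(self.buckets, value)
        self.counts[idx] += 1
        self.sum += value
        self.count += 1

    def cumulative_counts(self):
        # Cumulative per-bucket counts, as exposed on /metrics.
        total, out = 0, []
        for c in self.counts:
            total += c
            out.append(total)
        return out

h = HistogramSketch(buckets=[0.1, 1.0, 10.0])
for v in (0.05, 0.5, 5.0, 50.0):
    h.observe(v)
print(h.cumulative_counts())  # -> [1, 2, 3, 4]
print(h.count, h.sum)         # -> 4 55.55
```

A test built on this model would observe a fixed set of values and assert the expected cumulative bucket counts, total count, and sum, which is the same shape of check a `/metrics` scrape of the server would support.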

Checklist

Commit Type:

Check the conventional commit type box here and add the corresponding label to the GitHub PR.

Related PRs:

- https://github.com/triton-inference-server/vllm_backend/pull/56
- https://github.com/triton-inference-server/python_backend/pull/374
- https://github.com/triton-inference-server/core/pull/386

Where should the reviewer start?

n/a

Test plan:

n/a

Caveats:

n/a

Background

A customer requested histogram metrics for the vLLM backend.

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

n/a

rmccorm4 commented 3 months ago

Please trigger a pipeline encapsulating all the latest changes so we can feel confident in the CI impact when looking at the cherry-picks.

yinggeh commented 3 months ago

Merged https://github.com/triton-inference-server/server/pull/7525 into the wrong branch. Needs re-approval to merge to main.