triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

ci: modifying stat count for `L0_server_status` #7820

Closed KrishnanPrash closed 5 days ago

KrishnanPrash commented 5 days ago

What does the PR do?

This PR modifies the stat count in ModelMetadataTest::test_infer_stats_no_model() to account for potential changes made to qa_model_repository. Additionally, to assist in future debugging, a request is sent to the model repository index API to be able to catch any changes made to qa_model_repository by comparing job logs.