triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

ci: Reducing flakiness of `L0_python_api` #7674

Closed by KrishnanPrash 1 month ago

KrishnanPrash commented 1 month ago

What does the PR do?

Currently, if an inference request is sent with `tritonclient` while the gRPC frontend is being shut down, the StatusCode and message returned vary depending on the relative timing of the two operations. This PR modifies `test_grpc_req_during_shutdown` to accept any of the possible StatusCode values instead of a single expected one.
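
The assertion pattern described above might look roughly like the following minimal sketch, assuming a pytest-style test. The helper `_shutdown_grpc_frontend`, the model name, and the exact set of accepted codes are illustrative assumptions, not the actual utilities from the `L0_python_api` suite.

```python
import pytest
import tritonclient.grpc as grpcclient
from tritonclient.utils import InferenceServerException

# Depending on whether the shutdown completes before, during, or after the
# request is dispatched, the client may observe different gRPC codes.
# This set is an assumption for illustration.
ACCEPTED_STATUS_CODES = {"StatusCode.UNAVAILABLE", "StatusCode.CANCELLED"}


def _shutdown_grpc_frontend():
    """Hypothetical stand-in for stopping the server's gRPC frontend."""
    raise NotImplementedError("wire this to the in-process server API")


def test_grpc_req_during_shutdown():
    client = grpcclient.InferenceServerClient(url="localhost:8001")
    _shutdown_grpc_frontend()
    with pytest.raises(InferenceServerException) as excinfo:
        client.infer(model_name="identity", inputs=[])
    # The key change: assert membership in a set of plausible codes rather
    # than equality with a single code, so timing variation cannot cause
    # the test to flake.
    assert excinfo.value.status() in ACCEPTED_STATUS_CODES
```

Asserting set membership trades a small amount of strictness for determinism: any status code produced by a legitimate shutdown race passes, while genuinely unexpected errors still fail the test.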