Search before asking

- [X] I have searched the Inference issues and found no similar bug report.
Bug
Currently, there is no control over timeouts when making internal requests in the library:

- we need to expose such a control knob from the functions making requests
- we need to set a reasonable default that can be overridden via an environment variable

Otherwise we may end up with issues in components like InferencePipeline attempting to communicate with the RF backend - for instance while running workflows. A sketch of such a knob is shown below.
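A minimal sketch of what exposing such a knob could look like, assuming a `requests`-based helper. The `ROBOFLOW_API_REQUEST_TIMEOUT` variable name and the `get_from_url` function are illustrative assumptions, not the library's actual API:

```python
import os

import requests

# Hypothetical env-controlled default; the variable name is illustrative,
# not the library's actual configuration knob.
DEFAULT_TIMEOUT = float(os.getenv("ROBOFLOW_API_REQUEST_TIMEOUT", "5.0"))


def get_from_url(url: str, timeout: float = DEFAULT_TIMEOUT) -> requests.Response:
    """Make an internal GET request that fails fast instead of hanging.

    Callers (e.g. InferencePipeline talking to the RF backend while
    running workflows) can override the timeout per call; otherwise the
    env-driven default applies.
    """
    response = requests.get(url, timeout=timeout)
    response.raise_for_status()
    return response
```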
Environment
No response
Minimal Reproducible Example
No response
Additional
No response
Are you willing to submit a PR?