### Search before asking

- [X] I have searched the Inference issues and found no similar feature requests.

### Description

Workflow blocks currently expose two methods, `run()` and `run_remotely()`, and a single global flag decides between local and remote execution. A better design would likely be a single `run()` method, with the local-vs-remote decision made per step, based on environment configuration.

Original idea: https://github.com/roboflow/inference/pull/343#discussion_r1585670447
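A minimal sketch of what the proposed per-step dispatch could look like. All names here (`Block`, `STEP_<NAME>_EXECUTION`, `DEFAULT_EXECUTION`, `_run_locally`, `_run_remotely`) are illustrative assumptions, not the actual Inference API:

```python
import os
from typing import Any

class Block:
    """Hypothetical block with a single run() entry point that picks
    local vs remote execution per step from environment configuration,
    instead of separate run()/run_remotely() methods."""

    def __init__(self, step_name: str):
        # step_name identifies this step so its execution mode can be
        # configured independently of other steps
        self.step_name = step_name

    def run(self, *args: Any, **kwargs: Any) -> Any:
        # Per-step override, e.g. STEP_DETECTOR_EXECUTION=remote,
        # falling back to a global default (local).
        env_key = f"STEP_{self.step_name.upper()}_EXECUTION"
        mode = os.environ.get(env_key, os.environ.get("DEFAULT_EXECUTION", "local"))
        if mode == "remote":
            return self._run_remotely(*args, **kwargs)
        return self._run_locally(*args, **kwargs)

    def _run_locally(self, *args: Any, **kwargs: Any) -> str:
        return f"{self.step_name}: local"

    def _run_remotely(self, *args: Any, **kwargs: Any) -> str:
        return f"{self.step_name}: remote"


# Only the "detector" step is routed to remote execution; every other
# step keeps the default local mode.
os.environ["STEP_DETECTOR_EXECUTION"] = "remote"
print(Block("detector").run())    # detector: remote
print(Block("classifier").run())  # classifier: local
```

Callers would then always invoke `run()`, and deployment configuration alone would decide where each step executes.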
### Use case
No response
### Additional
No response
### Are you willing to submit a PR?