Description
When running outside of Docker, our default was to send inference requests to the Roboflow Hosted API. That is slower and defeats the purpose of self-hosting. This change makes the locally running inference server the default target instead.
Note: for most folks this won't have any impact because we already set the right env vars in our Dockerfiles.
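For context, the distinction only matters for clients that don't point at a server explicitly. The sketch below is a minimal illustration using the inference_sdk HTTP client; the model ID and API key are placeholders, and it shows the two targets rather than the exact default logic touched by this PR:

```python
from inference_sdk import InferenceHTTPClient

# Roboflow Hosted API (the old implicit target when nothing was configured):
hosted = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="YOUR_ROBOFLOW_API_KEY",  # placeholder
)

# Self-hosted server on the same machine (what you usually want when self-hosting):
local = InferenceHTTPClient(
    api_url="http://localhost:9001",  # default inference server port
    api_key="YOUR_ROBOFLOW_API_KEY",  # placeholder
)
```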
Type of change
[x] Bug fix (non-breaking change which fixes an issue)
How has this change been tested? Please provide a test case or example of how you tested the change.
Tested manually on my local machine, running the server outside of Docker.
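For reference, a rough sketch of the kind of local check involved (assuming the server is running outside of Docker on its default port, 9001; the model ID and API key are placeholders):

```python
from inference_sdk import InferenceHTTPClient

# Point the SDK client at the locally running server and confirm that
# predictions come back from it rather than from the hosted API.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="YOUR_ROBOFLOW_API_KEY",  # placeholder
)
result = client.infer("path/to/image.jpg", model_id="your-model/1")  # placeholders
print(result)
```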
Any specific deployment considerations
No
Docs