roboflow / inference

A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
https://inference.roboflow.com

Default to Local Workflows Execution #515

Closed. yeldarby closed this 2 months ago.

yeldarby commented 2 months ago

Description

When running outside of Docker, our default was to send Workflows inference requests to the Roboflow Hosted API. That is slower and defeats the purpose of self-hosting. This PR swaps to the more logical default: executing Workflows locally.

Note: for most folks this won't have any impact because we already set the right env vars in our Dockerfiles.
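For context, a minimal sketch of the kind of default flip described above; the env var name `WORKFLOWS_STEP_EXECUTION_MODE` and the file it would live in (e.g. `inference/core/env.py`) are assumptions for illustration, not confirmed by this PR:

```python
import os

# Sketch of flipping a fallback default, assuming execution mode is
# controlled by an env var like WORKFLOWS_STEP_EXECUTION_MODE.

# Before: an unset env var fell back to remote execution via the Hosted API.
# WORKFLOWS_STEP_EXECUTION_MODE = os.getenv("WORKFLOWS_STEP_EXECUTION_MODE", "remote")

# After: an unset env var now falls back to local execution.
WORKFLOWS_STEP_EXECUTION_MODE = os.getenv("WORKFLOWS_STEP_EXECUTION_MODE", "local")

# Containerized deployments are unaffected because the Dockerfiles already
# set the env var explicitly (e.g. ENV WORKFLOWS_STEP_EXECUTION_MODE=local),
# so only bare-metal runs with no env var pick up the new default.
```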

Type of change

How has this change been tested? Please provide a test case or example of how you tested the change.

On my local machine.

Any specific deployment considerations

No

Docs