A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
I'm working on enabling the core app to connect to and communicate with self-hosted inference servers. CORS is off by default, which means you can't reach the server from a web browser.
That seems like a bad default, because it prevents you from calling the server from any web service you're building. We already have it enabled for our Hosted API via the lambda config.
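For context, enabling CORS in a FastAPI-based server is a one-middleware change, sketched below. The framework choice and the wide-open `allow_origins=["*"]` are illustrative assumptions, not a statement of how this PR implements it; tighten the origin list for security-sensitive deployments.

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow browser clients from any origin to call the inference endpoints.
# In locked-down deployments, replace "*" with an explicit list of origins.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)
```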
Type of change
[x] New feature (non-breaking change which adds functionality)
How has this change been tested? Please provide a test case or example of how you tested the change.
With a self-hosted server running locally and exposed via ngrok, connected to my web dev environment.
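The browser-facing behavior can also be spot-checked with a CORS preflight request from the command line. The port (9001) and origin below are placeholders; substitute whatever your self-hosted server and web app actually use.

```shell
# Send a preflight OPTIONS request the way a browser would before a cross-origin POST.
# A CORS-enabled server should include Access-Control-Allow-Origin in its response headers.
curl -s -i -X OPTIONS http://localhost:9001/ \
  -H "Origin: http://localhost:3000" \
  -H "Access-Control-Request-Method: POST"
```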
Any specific deployment considerations
No
Docs