roboflow / inference

A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
https://inference.roboflow.com

Add support for a tunnel to expose inference server to remote calls #451

Closed iurisilvio closed 3 weeks ago

iurisilvio commented 4 weeks ago

Description

Embed a custom tunnelmole-client to expose a tunnel:

```bash
inference server start --tunnel --roboflow-api-key <api_key>
```

Our custom tunnelmole-service authorizes the tunnel initialization based on the Roboflow API key.
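
Roughly, what the `--tunnel` flag does under the hood looks like the sketch below, assuming the docker SDK for Python. The image name, container name, and environment variable are illustrative placeholders, not the exact values used in this PR:

```python
import docker

TUNNEL_IMAGE = "roboflow/tunnelmole-client:latest"  # hypothetical image name

def start_tunnel(roboflow_api_key: str) -> None:
    client = docker.from_env()
    # Launch the tunnel client next to the local inference server; the
    # tunnelmole-service authorizes the tunnel based on the Roboflow API key.
    client.containers.run(
        TUNNEL_IMAGE,
        detach=True,
        name="inference-tunnel",  # hypothetical container name
        network_mode="host",      # lets the client reach the local server port
        environment={"ROBOFLOW_API_KEY": roboflow_api_key},
    )
```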

Type of change

Experimental feature

How has this change been tested? Please provide a test case or example of how you tested the change.

I built it from source and ran it locally. It works nicely!

It is easy to test by calling a Universe model with a remote image:

```bash
# inference will output a random URL to be used here
export INFERENCE_HOST=https://somethingrandom.roboflow.run
curl -X POST "$INFERENCE_HOST/rock-paper-scissors-sxsw/14?image=https://source.roboflow.com/c8QoUtY71EUIn6gsXQMkSt8K0fC3/ST1zWYMcgt8HFqFBZNJy/original.jpg"
```
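
The same call from Python, for reference (equivalent to the curl above):

```python
import requests

# URL printed by `inference server start --tunnel` at startup
host = "https://somethingrandom.roboflow.run"
resp = requests.post(
    f"{host}/rock-paper-scissors-sxsw/14",
    params={"image": "https://source.roboflow.com/c8QoUtY71EUIn6gsXQMkSt8K0fC3/ST1zWYMcgt8HFqFBZNJy/original.jpg"},
)
print(resp.json())
```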

Any specific deployment considerations

The CLI pulls a custom tunnelmole image. This image is hosted in Roboflow's GCR with public access.
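
To pre-pull the image manually, something like this works; the image name below is a placeholder for the actual one in the Roboflow GCR:

```python
import docker

client = docker.from_env()
# Placeholder name; the actual image lives in Roboflow's GCR with public access.
client.images.pull("gcr.io/roboflow/tunnelmole-client", tag="latest")
```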

Docs

Docs have not been changed yet, but new docs are required.

CLAassistant commented 4 weeks ago

CLA assistant check
All committers have signed the CLA.

iurisilvio commented 3 weeks ago

I added some docs and also added a hook to stop the tunnel container when the inference server stops.
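
Conceptually, the stop hook does something like this (same hypothetical container name as the start sketch above):

```python
import docker

def stop_tunnel() -> None:
    client = docker.from_env()
    # Stop (and remove) the tunnel container when the inference server stops;
    # "inference-tunnel" is the same hypothetical name as in the start sketch.
    for container in client.containers.list(filters={"name": "inference-tunnel"}):
        container.stop()
        container.remove()
```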