## Description

Embed a custom `tunnelmole-client` to expose a tunnel:

```
inference server start --tunnel --roboflow-api-key <api_key>
```

Our custom `tunnelmole-service` authorizes the tunnel initialization based on the Roboflow API key.
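A minimal sketch of what the authorization step could look like from the client side. The endpoint path (`/initialize`) and payload field names are illustrative assumptions for this PR description, not the actual `tunnelmole-service` API:

```python
# Hypothetical sketch: composing a tunnel-initialization request that is
# authorized by a Roboflow API key. Endpoint path and field names are
# illustrative, not the real tunnelmole-service contract.
import json


def build_tunnel_init_request(api_key: str, service_url: str) -> dict:
    """Compose the HTTP request the client could send to open a tunnel."""
    return {
        "url": service_url.rstrip("/") + "/initialize",
        "headers": {"Content-Type": "application/json"},
        # The service would validate this key against Roboflow before
        # allocating a public tunnel URL.
        "body": json.dumps({"api_key": api_key}),
    }


req = build_tunnel_init_request("my-key", "https://tunnel.example.run/")
```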
## Type of change

Experimental feature.

- [x] New feature (non-breaking change which adds functionality)
## How has this change been tested, please provide a testcase or example of how you tested the change?

I built from source and ran it locally. Works nicely!
It is easy to test by calling a Universe model with a remote image:

```bash
# inference will output a random URL to be used here
export INFERENCE_HOST=https://somethingrandom.roboflow.run
curl -X POST "$INFERENCE_HOST/rock-paper-scissors-sxsw/14?image=https://source.roboflow.com/c8QoUtY71EUIn6gsXQMkSt8K0fC3/ST1zWYMcgt8HFqFBZNJy/original.jpg"
```
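The same smoke test can be sketched in Python. The model path and image URL come from the curl example above; the host is whatever tunnel URL `inference` prints on startup:

```python
# Build the same POST URL as the curl smoke test above.
from urllib.parse import urlencode


def inference_url(host: str, model_id: str, version: int, image_url: str) -> str:
    """Build the POST URL for running a hosted model on a remote image."""
    query = urlencode({"image": image_url})  # percent-encodes the image URL
    return f"{host}/{model_id}/{version}?{query}"


url = inference_url(
    "https://somethingrandom.roboflow.run",
    "rock-paper-scissors-sxsw",
    14,
    "https://source.roboflow.com/c8QoUtY71EUIn6gsXQMkSt8K0fC3/ST1zWYMcgt8HFqFBZNJy/original.jpg",
)
# POST this URL (e.g. with requests.post(url)) to get predictions back.
```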
## Any specific deployment considerations

The server pulls a custom tunnelmole image. This image is hosted in the Roboflow GCR with public access.
## Docs

Docs are not updated yet; new docs will be required.