roboflow / inference

A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
https://inference.roboflow.com

If I run inference on a local RTSP stream, do I still need an API_KEY and a running server, right? #383

Closed. YoungjaeDev closed this issue 2 months ago.

YoungjaeDev commented 2 months ago

Search before asking

Question

Given how roboflow inference is structured, do I still need to start the server and enter the API_KEY even if I am only consuming a local RTSP stream? Or is there another way?

Additional

No response

grzegorz-roboflow commented 2 months ago

Hi @YoungjaeDev, you do not need to provide an API_KEY if you are using foundational models. Consider the example below:

source /path/to/venv/where/inference/is/installed/bin/activate
inference server start
inference infer --input /path/to/file.jpg --model_id yolov8m-seg-640
# yields {'time': 0.24999908400002369, 'image': {'width': 3000, 'height': 4000}, 'predictions': [ ... ]}
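
Since the question is about RTSP specifically: the Python SDK can consume a stream directly with a foundational model, still without an API key. A minimal sketch using InferencePipeline with the built-in render_boxes sink and a detection alias; the RTSP URL is a placeholder:

from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

# Foundational model, so no api_key argument is needed.
pipeline = InferencePipeline.init(
    model_id="yolov8m-640",
    video_reference="rtsp://user:pass@192.168.0.100:554/stream",  # placeholder URL
    on_prediction=render_boxes,  # draws and displays predictions per frame
)
pipeline.start()
pipeline.join()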

However, as soon as you want to use a non-foundational model (i.e., a model you trained), you need to provide an API key:

source /path/to/venv/where/inference/is/installed/bin/activate
inference server start
inference infer --input /path/to/file.jpg --model_id my_private_model --api-key <secret>
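
For reference, the same server-backed call can be made from Python via the inference-sdk HTTP client; a rough sketch where the model ID and key are placeholders:

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # default port used by `inference server start`
    api_key="<secret>",               # required for models you trained
)
result = client.infer("/path/to/file.jpg", model_id="my_private_model")
print(result)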
grzegorz-roboflow commented 2 months ago

Hi @YoungjaeDev, I will go ahead and close this issue. Please reopen if you would like to share more context.