Closed YoungjaeDev closed 2 months ago
Hi @YoungjaeDev, you do not need to provide an API_KEY if you are using foundational models. Consider the example below:
```
source /path/to/venv/where/inference/is/installed/bin/activate
inference server start
inference infer --input /path/to/file.jpg --model_id yolov8m-seg-640
# yields {'time': 0.24999908400002369, 'image': {'width': 3000, 'height': 4000}, 'predictions': [ ... ]}
```
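The CLI prints a JSON-style payload like the one shown above. A minimal sketch of reading fields out of such a response in Python (the values are copied from the output above; the `predictions` list is left empty here since its entries were elided in the original output):

```python
# Shape of the response printed by `inference infer` above.
response = {
    "time": 0.24999908400002369,
    "image": {"width": 3000, "height": 4000},
    "predictions": [],  # list of detected instances (elided in the output above)
}

# Pull out the image dimensions and timing reported by the server.
width = response["image"]["width"]
height = response["image"]["height"]
elapsed = response["time"]

print(f"{width}x{height} processed in {elapsed:.3f}s")
```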
However, as soon as you want to use a non-foundational model (i.e., a model you trained), you need to provide an API key:
```
source /path/to/venv/where/inference/is/installed/bin/activate
inference server start
inference infer --input /path/to/file.jpg --model_id my_private_model --api-key <secret>
```
Hi @YoungjaeDev, I will go ahead and close this issue. Please reopen if you would like to share more context.
Search before asking
Question
Given the Roboflow Inference architecture, do I still need to start the server and provide an API_KEY even when I am simply consuming a local RTSP_STREAM? Or is there another way?
Additional
No response