roboflow / inference

A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
https://inference.roboflow.com

add api key fallback for model monitoring #366

Closed hansent closed 5 months ago

hansent commented 5 months ago

Description

The model monitoring feature only works if a global API key is provided when the server is started. This change modifies the model manager to set a fallback API key for the pingback when it sees an API key used in an inference request, so model monitoring data can be sent by default without requiring a separate global API key to be set.
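The fallback behavior described above can be sketched roughly as follows. This is a minimal illustration, not the actual roboflow/inference internals; the class names (`UsageCollector`, `ModelManager`) and attributes are hypothetical:

```python
class UsageCollector:
    """Stands in for the model-monitoring pingback (illustrative only)."""

    def __init__(self, api_key=None):
        # The global API key configured at server start; may be None.
        self.api_key = api_key


class ModelManager:
    """Hypothetical model manager showing the API-key fallback."""

    def __init__(self, pingback):
        self.pingback = pingback

    def infer(self, model_id, request_api_key, payload):
        # Fallback: if no global key was configured when the server
        # started, remember the key from an inference request so that
        # monitoring data can still be sent.
        if self.pingback.api_key is None and request_api_key:
            self.pingback.api_key = request_api_key
        # ... actual inference would happen here ...
        return {"model_id": model_id}
```

With this sketch, the first request carrying a key populates the pingback, while an explicitly configured global key is never overwritten.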

Type of change


How has this change been tested? Please provide a test case or an example of how you tested the change.

Tested locally by running the CPU Docker container, observing the debug logs, and sending inference requests via Postman.

Any specific deployment considerations

n/a

Docs

n/a

probicheaux commented 5 months ago

(I merged bypassing protections because there was effectively 1 author and 1 reviewer, not 2 authors)