Description

The model monitoring feature currently works only if a global API key is provided when the server is started. This change modifies the model manager to set a fallback API key for the pingback whenever it sees an API key used on an inference request, so model monitoring data can be sent by default without requiring a separate global API key to be configured.

Type of change

[X] Bug fix (non-breaking change which fixes an issue)

How has this change been tested? Please provide a test case or example of how you tested the change.

Ran the CPU Docker container locally, observed the debug logs, and sent inference requests via Postman.
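A minimal sketch of the fallback behavior described above. The class and attribute names (`PingbackSender`, `ModelManager`, `fallback_api_key`) are illustrative assumptions, not the actual identifiers in the inference codebase:

```python
# Hypothetical sketch of the fallback-key behavior; names are illustrative
# and do not reflect the real inference server API.

class PingbackSender:
    """Sends model monitoring data using whichever API key is available."""

    def __init__(self, global_api_key=None):
        self.global_api_key = global_api_key   # set at server start, may be None
        self.fallback_api_key = None           # learned from inference requests

    @property
    def api_key(self):
        # Prefer the explicitly configured global key; otherwise fall back
        # to a key observed on an inference request.
        return self.global_api_key or self.fallback_api_key


class ModelManager:
    def __init__(self, pingback):
        self.pingback = pingback

    def infer(self, request_api_key, payload):
        # If no key is available yet, remember the request's key so
        # monitoring data can still be sent by default.
        if self.pingback.api_key is None and request_api_key:
            self.pingback.fallback_api_key = request_api_key
        return {"result": payload}  # placeholder for actual inference


manager = ModelManager(PingbackSender(global_api_key=None))
manager.infer("user-key-123", {"image": "..."})
print(manager.pingback.api_key)  # the request's key is now used for pingbacks
```

With this shape, an explicitly configured global key always wins; the fallback only fills the gap when no global key was provided at startup.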
Any specific deployment considerations
n/a
Docs
n/a