Description
The _inference service uses the _ml/trained_models API to deploy models for use in inference. However, users are still able to manage these deployments directly through the trained_models API. To make it clear to users when they should not change a trained model because it is in use by the inference service, we should report the association between _inference service models and _ml/trained_models deployments.
Currently, it is possible to obtain this association by issuing a GET /_inference/_all request and a GET _ml/trained_models/_all/_stats request, then finding every instance where the model_id returned by the inference API matches a deployment_id in the _ml/trained_models API.
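As an illustration of the current workaround, here is a minimal Python sketch that cross-references the two responses. It assumes a local unsecured cluster at localhost:9200 and a particular response shape (an endpoints list with model_id under service_settings from the inference API, and deployment_stats.deployment_id under trained_model_stats from the stats API); those field names are assumptions and may differ by version or service.

```python
import requests

ES = "http://localhost:9200"  # assumed local, unsecured cluster; add auth as needed

# 1. Collect the model_id values reported by the inference API.
inference = requests.get(f"{ES}/_inference/_all").json()
inference_model_ids = set()
# The list key ("endpoints") and the location of model_id ("service_settings")
# are assumptions about the response shape.
for endpoint in inference.get("endpoints", []):
    model_id = endpoint.get("service_settings", {}).get("model_id")
    if model_id:
        inference_model_ids.add(model_id)

# 2. Walk the trained models stats and keep deployments whose deployment_id
#    matches a model_id reported by the inference API.
stats = requests.get(f"{ES}/_ml/trained_models/_all/_stats").json()
for model_stats in stats.get("trained_model_stats", []):
    deployment_id = model_stats.get("deployment_stats", {}).get("deployment_id")
    if deployment_id and deployment_id in inference_model_ids:
        # This deployment backs an inference endpoint and should not be
        # modified directly through the trained_models API.
        print(f"{model_stats['model_id']} -> deployment {deployment_id} "
              "is managed by the _inference service")
```

Having to stitch these two responses together client-side is exactly the friction this issue proposes to remove.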
However, we will want to make this a single API call. We will want the association to be displayed in the trained models stats API (which will require putting that information on the deployment), and we will want to report that information in a _stats endpoint in the inference service.