Willingness to contribute
Yes. I can contribute this feature independently.
Proposal Summary
This might be better designed as a load-time argument rather than my hardcoded patch:

mlflow.pyfunc.load_model(model_uri, model_config={"endpoint/deployment_id": "foo"})

and the OpenAI flavor would automatically inject that as model config.
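A minimal sketch of how such a load-time override could be resolved, assuming a hypothetical "endpoint/deployment_id" key in model_config (the function name and key are illustrative, not MLflow's actual API):

```python
def resolve_serving_target(model_config, default_target=None):
    """Pick the serving target for a loaded model.

    If the hypothetical "endpoint/deployment_id" key is present in
    model_config, route requests through the deployment server endpoint;
    otherwise fall back to the provider's default target.
    """
    model_config = model_config or {}
    override = model_config.get("endpoint/deployment_id")
    return override if override is not None else default_target


# Override supplied at load time wins:
print(resolve_serving_target({"endpoint/deployment_id": "foo"}, "openai-direct"))  # foo
# No override: fall back to the direct provider target.
print(resolve_serving_target(None, "openai-direct"))  # openai-direct
```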
An even cooler feature might be a global that is essentially an mlflow.prefer_deployment_server() equivalent (or an environment variable) that applies automatically, so that consumption code doesn't have to be mutated and the switch becomes a config-level operation.
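The global toggle could look something like this sketch; the environment variable name and both function names are assumptions for illustration, not existing MLflow APIs:

```python
import os

# Hypothetical env var name; the proposal only suggests "env var?".
PREFER_DEPLOYMENT_SERVER_VAR = "MLFLOW_PREFER_DEPLOYMENT_SERVER"


def prefer_deployment_server(enable=True):
    """Config-level switch: flip the process-wide preference once,
    without touching any consumption code."""
    os.environ[PREFER_DEPLOYMENT_SERVER_VAR] = "true" if enable else "false"


def deployment_server_preferred():
    """Consumption code (or the flavor internals) just reads the
    preference at request time."""
    return os.environ.get(PREFER_DEPLOYMENT_SERVER_VAR, "false").lower() == "true"


prefer_deployment_server()
print(deployment_server_preferred())  # True
```

Reading the flag from the environment means it can also be set outside the process (e.g. in deployment config) with no code change at all.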
Right now I'm using a simple heuristic to match the model to a deployment server endpoint (provider, task, model ID prefix), which works fine.
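The heuristic above can be sketched as a first-match lookup over the configured endpoints; the field names and endpoint shape here are assumptions, not the deployment server's actual schema:

```python
def match_endpoint(model_info, endpoints):
    """Return the name of the first endpoint whose provider and task match
    the model and whose model-ID prefix matches, or None if nothing fits."""
    for ep in endpoints:
        if (
            ep["provider"] == model_info["provider"]
            and ep["task"] == model_info["task"]
            and model_info["model_id"].startswith(ep["model_id_prefix"])
        ):
            return ep["name"]
    return None


# Illustrative endpoint registry (values are made up).
endpoints = [
    {"name": "chat-gpt4", "provider": "openai",
     "task": "llm/v1/chat", "model_id_prefix": "gpt-4"},
    {"name": "embeddings", "provider": "openai",
     "task": "llm/v1/embeddings", "model_id_prefix": "text-embedding"},
]

print(match_endpoint(
    {"provider": "openai", "task": "llm/v1/chat", "model_id": "gpt-4o-mini"},
    endpoints,
))  # chat-gpt4
```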
Motivation
What is the use case for this feature?
Why is this use case valuable to support for MLflow users in general?
Why is this use case valuable to support for your project(s) or organization?
Why is it currently difficult to achieve this use case?

Details
No response
What component(s) does this bug affect?
[ ] area/artifacts: Artifact stores and artifact logging
[ ] area/build: Build and test infrastructure for MLflow
[ ] area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
[ ] area/docs: MLflow documentation pages
[ ] area/examples: Example code
[ ] area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
[ ] area/models: MLmodel format, model serialization/deserialization, flavors
[ ] area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
[ ] area/projects: MLproject format, project running backends
[ ] area/scoring: MLflow Model server, model deployment tools, Spark UDFs
[ ] area/server-infra: MLflow Tracking server backend
[ ] area/tracking: Tracking Service, tracking client APIs, autologging

What interface(s) does this bug affect?
[ ] area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
[ ] area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
[ ] area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
[ ] area/windows: Windows support

What language(s) does this bug affect?
[ ] language/r: R APIs and clients
[ ] language/java: Java APIs and clients
[ ] language/new: Proposals for new client languages

What integration(s) does this bug affect?
[ ] integrations/azure: Azure and Azure ML integrations
[ ] integrations/sagemaker: SageMaker integrations
[ ] integrations/databricks: Databricks integrations