SeldonIO / MLServer

An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more
https://mlserver.readthedocs.io/en/latest/
Apache License 2.0

Inference parameters are not passed to the predict function in MLFlow runtime #1660

Open Okamille opened 5 months ago

Okamille commented 5 months ago

How to reproduce

Define an MLflow model that uses a custom params argument in its predict method. Example from the MLflow documentation:

from mlflow.pyfunc import PythonModel


class ModelWrapper(PythonModel):
    def __init__(self):
        self.model = None

    def load_context(self, context):
        from joblib import load

        self.model = load(context.artifacts["model_path"])

    def predict(self, context, model_input, params=None):
        # `params` carries request-time options; fall back to plain `predict`.
        params = params or {"predict_method": "predict"}
        predict_method = params.get("predict_method")

        if predict_method == "predict":
            return self.model.predict(model_input)
        elif predict_method == "predict_proba":
            return self.model.predict_proba(model_input)
        elif predict_method == "predict_log_proba":
            return self.model.predict_log_proba(model_input)
        else:
            raise ValueError(f"The prediction method '{predict_method}' is not supported.")

Log that model to MLflow and serve it with MLServer. Then send an inference request that includes a specific parameter: the parameter value is not taken into account. A sketch of these steps is shown below.
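For illustration, here is a minimal sketch of those steps, assuming a scikit-learn classifier, a model named wrapped-model, and MLServer's default HTTP port 8080 (the model name, artifact paths and port are assumptions for the example, not taken from the issue):

import mlflow
from joblib import dump
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Persist a small classifier so that ModelWrapper.load_context can load it.
X, y = load_iris(return_X_y=True)
dump(LogisticRegression(max_iter=1000).fit(X, y), "model.joblib")

# Log the wrapper defined above as an MLflow pyfunc model.
with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="model",
        python_model=ModelWrapper(),
        artifacts={"model_path": "model.joblib"},
    )

Once the logged model is served by MLServer, a V2 inference protocol request can carry the parameter in its top-level parameters field:

import requests

# "predict_method" travels in the request-level "parameters" field of the
# V2 inference protocol; today it never reaches ModelWrapper.predict.
payload = {
    "inputs": [
        {
            "name": "input",
            "shape": [1, 4],
            "datatype": "FP64",
            "data": [5.1, 3.5, 1.4, 0.2],
        }
    ],
    "parameters": {"predict_method": "predict_proba"},
}
response = requests.post(
    "http://localhost:8080/v2/models/wrapped-model/infer", json=payload
)
print(response.json())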

Proposed fix

Currently, the predict method in the MLflow runtime does not forward the request parameters to the model's predict call. We can pass them through, as was already done for the custom invocation endpoint (see the sketch below).
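As a rough illustration only (the decoding and encoding details below are a sketch, not the actual mlserver-mlflow source), the idea is to forward payload.parameters into the call to the loaded pyfunc model, which accepts a params argument in recent MLflow versions:

import mlflow
import pandas as pd
from mlserver import MLModel
from mlserver.codecs import PandasCodec
from mlserver.types import InferenceRequest, InferenceResponse


class MLflowRuntimeSketch(MLModel):
    async def load(self) -> bool:
        # Load the logged pyfunc model; the URI comes from the model settings.
        self._model = mlflow.pyfunc.load_model(self.settings.parameters.uri)
        return True

    async def predict(self, payload: InferenceRequest) -> InferenceResponse:
        decoded = PandasCodec.decode_request(payload)

        # Forward request-level parameters to the MLflow model so that a
        # custom `predict(self, context, model_input, params=None)` sees them.
        # Protocol-reserved fields such as `content_type` are filtered out.
        params = None
        if payload.parameters is not None:
            params = {
                k: v
                for k, v in payload.parameters.dict(exclude_none=True).items()
                if k != "content_type"
            }

        prediction = self._model.predict(decoded, params=params)
        return PandasCodec.encode_response(
            self.name, pd.DataFrame(prediction), self.version
        )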

PR