Galileo-Galilei / kedro-mlflow

A kedro-plugin for integration of mlflow capabilities inside kedro projects (especially machine learning model versioning and packaging)
https://kedro-mlflow.readthedocs.io/
Apache License 2.0

Kedro hooks do not work inside packaged pipeline for serving #404

Open Lasica opened 1 year ago

Lasica commented 1 year ago

Description

When I package a kedro pipeline for serving with modelify, my kedro hooks no longer work.

Context

I was using hooks to initialize some structures and had to make workarounds because hooks are not called. I'd like hooks such as before_pipeline_run or after_catalog_created to be called once when the server starts serving. https://kedro.readthedocs.io/en/stable/hooks/introduction.html
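For context, a kedro hook of the kind mentioned above is just a class with specially named lifecycle methods. A minimal, self-contained sketch of the pattern, using a no-op stand-in for kedro's `hook_impl` decorator so it runs without kedro installed (the `MonitoringHooks` name and the initialization payload are illustrative, not from the actual project):

```python
# Sketch of a kedro-style lifecycle hook. In a real project you would use
# `from kedro.framework.hooks import hook_impl`; a no-op stand-in keeps the
# example self-contained.
def hook_impl(func):  # stand-in for kedro.framework.hooks.hook_impl
    return func


class MonitoringHooks:
    """Illustrative hook that initializes a monitoring client once per run."""

    def __init__(self):
        self.monitor = None

    @hook_impl
    def before_pipeline_run(self, run_params, pipeline, catalog):
        # Called once before the pipeline starts: a natural place to set up
        # a monitoring client with a reference dataset and gateway address.
        if self.monitor is None:
            self.monitor = {"gateway": "localhost:9091", "ready": True}
```

The issue is that this method is never invoked when the packaged model is served with `mlflow models serve`.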

Steps to Reproduce

  1. Make a pipeline with hooks
  2. use kedro mlflow modelify ... to package it for serving
  3. start serving with mlflow models serve ...
  4. observe that hooks are not run

I may publish minimal example code later to check it out.

Expected Result

Hooks should run.

Actual Result

They do not.

Your Environment

Does the bug also happen with the last version on master?

Yes.

Galileo-Galilei commented 1 year ago

Hi @Lasica, you are right that hooks do not run when the pipeline is stored as an mlflow model, but this is intended. In fact, I think most hooks should not be packaged with the pipeline, e.g.:

  1. hooks which need to read a configuration file at prediction time, because the model is supposed to be configuration-independent
  2. hooks which intend to write data to a third-party filesystem / database, for instance for logging or versioning data, because the model must be able to run in a location where it has no write access
  3. hooks which are not linked to the inference pipeline (hooks are declared for the entire kedro project, and most of them may not be useful for inference)

That said, I understand some hooks may be useful (e.g. if you have a custom hook which validates your data between nodes).

I think the solution may be to enable passing hooks manually to the CLI to specify the ones you want to store with the model. Can you tell me more about your use case?

Lasica commented 1 year ago

Hmm, I store configuration as params in kedro, and those are packaged together with the other dependencies, so I'm not sure the 1st point fully applies: such configuration is saved together with the model.

My use case is this: I'm attempting to add model monitoring (in particular data drift) to the inference pipeline used for serving, exposing those metrics to Prometheus to be scraped (or pushed to a pushgateway). That class needs some initialization: a reference dataset, the pushgateway address, and other values. In the inference pipeline I've added a node at the end that passes the data to the monitoring metrics calculation. I wanted to make a before_pipeline_run or after_catalog_created hook that would initialize this monitoring class, so that it's done once and I can call the monitoring node with the minimal necessary arguments.

Currently I needed a workaround: I add the initialization arguments to the monitoring node and check on every call whether the monitoring class singleton is already initialized.

I want this hook to only run for inference pipeline.

Galileo-Galilei commented 1 year ago

I understand the use case, but I think it is very bad design to hide such processing inside the mlflow model. The mlflow model is supposed to be a pure data pipeline which takes a single input (a dataframe / an array) and returns a single output (a series / dataframe with predictions).

The main drawback of hiding your monitoring node inside the model is that you make your model very dependent on the environment where you deploy it. If your model runs in an environment that has no access to Prometheus, has credentials issues, has timeout issues, or should run on a cluster with restricted permissions... you will need to recreate an mlflow model for each situation. The model is supposed to be completely independent.

IMHO, the correct way to do this is described in this tutorial, and I think it makes sense for your use case. What do you think?

Lasica commented 1 year ago

You make valid points about flexibility of this inference pipeline model with co-located monitoring. I wasn't aware it's easy to re-use pipeline as a node in that way. Thanks for that!

I think this monitoring solution has some pros and cons: what you suggest does not remove the environment dependency, you just end up with two models, and one of them will still be environment-dependent. Whether that dependency becomes an issue depends on how fail-safe the monitoring is written.

This is, however, a bit off the subject of including hooks in the pipeline, as I still think there should be a way to include some of them in the model. I see that you view this functionality of packaging pipelines as models in light of an ideal model abstraction that does just calculations. However, this functionality gives us extra flexibility to add some utilities around the model and an easy way to set up serving, if one wishes. I think hooks are part of that flexibility, and it would be nice to have a way to whitelist them.

Galileo-Galilei commented 1 year ago

what you suggest does not help address that the model will be dependent on environment

It's true, but this is still more flexible and robust to change. Say you want to change your monitoring node:

➡️ If I understand correctly, it is quite clear that the modelify command is not really what you need. You just want to leverage mlflow capabilities to create an API automatically from a kedro pipeline. I understand this and, AFAIK, there is no real better solution for now. I hope that kedro-serving will close the gap one day, but it's not even close for now. In an ideal world, you'd be able to run kedro serve --pipeline and it would automatically be served as an API, more flexibly than what mlflow offers.

a bit off the subject of including hooks in the pipeline however, as I still think there should be a way to include some of them in the model

My opinion here is the following:

If you want to help through a PR:

  1. Add a hooks argument (maybe a tuple of Hook) here and store it as an attribute: https://github.com/Galileo-Galilei/kedro-mlflow/blob/9df9c23b3ceb88fada34dc7bc939992b08c807bf/kedro_mlflow/mlflow/kedro_pipeline_model.py#L16-L23
  2. Change the hook registration logic here: https://github.com/Galileo-Galilei/kedro-mlflow/blob/9df9c23b3ceb88fada34dc7bc939992b08c807bf/kedro_mlflow/mlflow/kedro_pipeline_model.py#L212-L214 . We should check how kedro registers hooks with the new hook_manager created at runtime.
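A rough, self-contained sketch of the shape such a change could take. Everything here is a simplified stand-in: `KedroPipelineModelSketch` and `SimpleHookManager` mimic, but are not, the real kedro-mlflow `KedroPipelineModel` and kedro's pluggy-based hook manager:

```python
class SimpleHookManager:
    """Toy stand-in for kedro's pluggy-based hook manager."""

    def __init__(self):
        self._plugins = []

    def register(self, plugin):
        self._plugins.append(plugin)


class KedroPipelineModelSketch:
    """Simplified sketch of the proposed change: accept a tuple of hooks at
    construction time and register them with the hook manager created when
    the model is loaded for serving."""

    def __init__(self, pipeline, catalog, hooks=()):
        self.pipeline = pipeline
        self.catalog = catalog
        self.hooks = tuple(hooks)  # step 1: store hooks as an attribute

    def load_context(self):
        # step 2: register the stored hooks with a freshly created manager,
        # mirroring what kedro does with its runtime hook_manager
        self.hook_manager = SimpleHookManager()
        for hook in self.hooks:
            self.hook_manager.register(hook)
```

The real implementation would also need the hooks to survive pickling alongside the model artifact, which is worth checking in the PR.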

Lasica commented 1 year ago

Thanks for the well thought response.

In some rare situations, it may be needed to execute a hook during inference (e.g. to validate data, or to process data at runtime outside a node for some complex reasons), so I need to offer support for this use case.

That's all I was looking for as confirmation. I'll look at the code and try to start a PR later this/next week.

With my solution, the model is "fixed" (it is stored in a given run once and for all), and you may change your extra node as much as you want with no need to execute the modelify command again

I'm not sure I understand this. Wouldn't the monitoring pipeline also need to be packaged as a model to be served (for example by MLFlow)?