kedro-org / kedro

Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
https://kedro.org
Apache License 2.0

Improving the I/O transparency with `kedro run` #1691

Closed noklam closed 7 months ago

noklam commented 2 years ago

Introduction

With a highly parameterized configuration (Jinja, Hydra, or OmegaConf), it is not easy to troubleshoot data. It is often useful to get the fully resolved path so users can inspect the data manually. Currently, users need to hack into the context and call `yaml.dump` to get this information.

i.e. `s3://{base_path}//{special_parameter}` should be compiled to `s3://prefix/filename`
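A toy illustration of the desired compilation step, using plain `str.format` (the parameter names and values are made up for the example, not a real Kedro config):

```python
# Illustrative only: resolving a templated catalog path into the concrete
# path a user could inspect manually.
params = {"base_path": "s3://prefix", "special_parameter": "filename"}
template = "{base_path}/{special_parameter}"
print(template.format(**params))  # s3://prefix/filename
```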

Ultimately, the goal is to provide full transparency about the I/O within a `kedro run`; users should be able to get this information for logging or for reproducing a particular experiment.

Background

Related Issues:

  1. `kedro compile` - basically a feature to generate the compiled version of the configuration at run time, potentially also with a `catalog.dumps`, which is more suitable for the Jupyter workflow. (It should log the full path in case a relative path is used)
  2. https://github.com/kedro-org/kedro/issues/1580 - the run-time load_version isn't available to users with VersionedDataSet and it's something that we need to fix.
  3. Enriching the logging message within kedro run - potentially with some DEBUG level message
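A minimal sketch of what point 1 could produce, using a toy `${...}` interpolation resolver in the spirit of `OmegaConf.resolve` (the catalog entries, key names, and output format here are illustrative assumptions, not Kedro API):

```python
import json
import re

# Hypothetical inputs: a globals dict and a templated catalog entry.
globals_ = {"base_path": "s3://prefix"}
catalog = {
    "my_dataset": {
        "type": "pandas.CSVDataSet",
        "filepath": "${base_path}/data/file.csv",
    }
}

def resolve(value, scope):
    # Substitute ${name} placeholders with values from `scope`.
    return re.sub(r"\$\{(\w+)\}", lambda m: scope[m.group(1)], value)

# "Compile" the catalog: every string value gets its placeholders resolved.
compiled = {
    name: {k: resolve(v, globals_) if isinstance(v, str) else v
           for k, v in entry.items()}
    for name, entry in catalog.items()
}

# A real command or hook could write this to a target file, dbt-compile style.
compiled_text = json.dumps(compiled, indent=2)
# compiled["my_dataset"]["filepath"] is now "s3://prefix/data/file.csv"
```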

Rollout strategy

There should be no breaking changes; 1 & 2 can be done in parallel. For 3, we can default to no changes and optionally expose more verbose logging.

datajoely commented 2 years ago

I would like to say that `dbt compile` is a nice workflow that we could take inspiration from. They have two folders, `models` and `target`; this is essentially like `src` and `dist` for us in some ways. The `target` folder is a neat way of showing what SQL will actually get passed to the database.

https://docs.getdbt.com/reference/commands/compile

antonymilne commented 2 years ago

The compiled filepath is actually already available in the data catalog, it's just quite hidden away: `catalog._get_dataset("dataset_name")._filepath` or `catalog.datasets.dataset_name._filepath`.

  1. `kedro compile`: I understand the need for this and I think there's room for something here, but I'm not sure that a new `kedro compile` command is the right way to handle it. While it's not breaking, adding a new CLI command is quite a big thing that's not so easily reversed. The correct solution here should become clearer as we solve the configuration question, e.g. `OmegaConf.resolve` does pretty much what we want here already, so if we use OmegaConf then the question would be how we want to expose that resolved config - maybe it would be more naturally done through a hook writing to a file rather than through a separate command.

  2. Agreed, but I think this and the heart of the issue here are just symptoms of a more general issue - see below.

  3. This is actually already done at DEBUG level, just no one knows about it... It includes the filepath and the version information. We should maybe consider the idea I mentioned in https://github.com/kedro-org/kedro/issues/1461 about having a verbose flag that adapts the logging level automatically and would make it more obvious that you can change the verbosity of logs. [screenshot: DEBUG log output showing filepath and version]
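For anyone who wants to surface those hidden messages today, a minimal sketch using the standard `logging` module, assuming the relevant messages are emitted under the `kedro.io` logger namespace (an assumption worth checking against your project's logging config):

```python
import logging

# Keep the default level for everything else, but lower the (assumed)
# "kedro.io" logger to DEBUG so the filepath/version messages appear.
logging.basicConfig(level=logging.INFO)
logging.getLogger("kedro.io").setLevel(logging.DEBUG)

print(logging.getLogger("kedro.io").isEnabledFor(logging.DEBUG))  # True
```

The same override can equally be expressed in the project's logging configuration file rather than in code.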

Fundamental issue

I think this and #1580 are actually just symptoms of a more fundamental underlying issue: the API and underlying workings of `io.core` and `io.data_catalog` are very confusing and should be rethought in general. These are very old components in Kedro, and maybe some of the decisions that were originally made about their design should be revised. There are also very likely old bits of code that could now be removed or renamed (e.g. who would guess that something named `add_feed_dict` is used to add parameters to the catalog?). It currently feels like tech debt rather than intentional design.

I don't think they're massively wrong as they stand, but I think it would be a good exercise to go through them and work out exactly what functionality we should expose in the API and how we might like to rework them - e.g. in the case raised here, there is quite a bit of confusion about how to get the filepath.

So I think we should look holistically at the structures involved here and work out what the API should look like, so that there's one clear way to access the things people need to access. I actually don't think this is such a huge task. Then we can tell much more easily whether we need any new functionality in these structures (like a `catalog.dumps`) or whether it's just a case of making what we already have better organised, documented, and clearer.

antonymilne commented 2 years ago

Curious what @deepyaman thinks of this. He may well have the honour of being the person most familiar with `io`! So hopefully he can give a more informed evaluation of its current state, points of confusion, and what could be improved. The above are just things I've observed, usually from trying to answer people's questions (e.g. "how do I find the filepath from the catalog?") and then getting confused myself going through the code trying to work it out.

noklam commented 2 years ago
  1. Generally agree. I don't think the implementation will be complicated; we just need to decide how to expose this feature to our users. The downside of using a hook is that it's generally hard to turn off: you need to go into `settings.py`, which is not a common user-facing file, and you probably only need this once in a while for troubleshooting.
  2. Agree
  3. 🤯 I had no idea about this
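One possible workaround for the toggling problem in point 1, sketched in plain Python (the class, method name, and environment variable are hypothetical, not the Kedro hooks API): gate the hook's behaviour behind an environment variable, so it stays registered in `settings.py` but is off by default and can be switched on per run.

```python
import os

class DumpCompiledConfigHook:
    """Hypothetical hook: only acts when KEDRO_DUMP_CONFIG=1 is set."""

    def after_catalog_created(self, catalog=None):
        # Disabled unless the user opts in for this particular run,
        # so no editing of settings.py is needed to turn it off.
        if os.environ.get("KEDRO_DUMP_CONFIG") != "1":
            return None
        # Placeholder for writing the resolved catalog/config to disk.
        return "compiled config dumped"

os.environ["KEDRO_DUMP_CONFIG"] = "1"
print(DumpCompiledConfigHook().after_catalog_created())  # compiled config dumped
```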