@Vincent-Liagre-QB opened this issue 2 years ago
@Vincent-Liagre-QB To what extent would this be covered by #1303?
Also, just to clarify, is your goal to be able to run all experiments with a single command, only run one experiment at a time, or do either? I think I understand your requirement as running one experiment at a time, but just wanted to make sure.
Finally, since you're from QB, you can also consider an internal project called Multi-Runner, but I 100% think these issues should be resolved in the open-source Kedro ecosystem in the long run!
@deepyaman, to your questions:

- Multi-Runner: indeed; I'm trying to get in touch with the team, as there are indeed synergies, but at the moment it doesn't allow running a single experiment at a time.
- Hydra (#1303): I have looked into Hydra recently but am not super familiar with it. From what I understand, it could indeed cover the need; only (1) the changes required here are probably easier to implement, and (2) while going with Hydra only would provide a standard approach, it would also create a dependency.

@Vincent-Liagre-QB Was just taking a closer look at this, including the code. To confirm my understanding of the requirements:
- […] `experiment_name` in the hierarchy.
- […] `MemoryDataSet`s?

I think modifying the filepath based on some param/other variable isn't too bad with Hooks. Storing config for each experiment requires something extra, if not using envs (and I get your reservation about using envs).
@deepyaman to your points:
Regarding hooks: in my understanding the limitation is that once you have implemented them, you cannot easily choose whether to apply them or not, i.e. hooks are not programmatically manageable.
Also, I prefer to think in terms of (1) feature needs and (2) possible code implementations (which I called "requirements"), and to think about them separately; so to summarise:
Feature needs:
Requirements for a possible implementation solution (note that in this case there is a 1-to-1 match with the feature needs, but that's not always the case)
(See the 1st message for more details)
Also, for the sake of enriching the discussion, I was told to look into this: https://kedro-mlflow.readthedocs.io/en/stable/index.html; not sure it covers the need, but worth looking into; will do.
3. More like being able to retrieve all resulting datasets (including intermediate results) from a run, so as to be able to persist the ones I want in the way I want.
My inclination is to recommend that you return them explicitly from a node. I think it lends itself well to the idea that pipelines have an interface of inputs and outputs.
> Regarding hooks: in my understanding the limitation is that once you have implemented them, you cannot easily choose whether to apply them or not, i.e. hooks are not programmatically manageable.

This is doable as long as you design the hooks accordingly (e.g. parse flags that determine when and where to apply the hook logic).
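A minimal sketch of that idea (the hook method choice and the flag name `apply_experiment_hook` are hypothetical; the `try/except` only lets the sketch run without Kedro installed): the hook is always registered, but its logic is gated on a value passed via `kedro run --params=...`.

```python
try:
    from kedro.framework.hooks import hook_impl
except ImportError:  # fallback so the sketch runs without Kedro installed
    def hook_impl(func):
        return func


class ConditionalHook:
    """Registered unconditionally in settings.py, but only acts when asked."""

    @hook_impl
    def before_pipeline_run(self, run_params: dict) -> None:
        # `extra_params` carries the values passed via `kedro run --params=...`
        extra = (run_params or {}).get("extra_params") or {}
        if not extra.get("apply_experiment_hook"):
            return  # hook is registered, but its logic is skipped for this run
        print(f"Applying experiment logic for {extra}")
```

Running with `--params=apply_experiment_hook:true` would trigger the logic; any other run leaves the hook inert, which makes hooks effectively manageable per run.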
@Vincent-Liagre-QB I'll first try to summarize the requirements to confirm my understanding is right.
Assuming my understanding is correct, I feel like hooks, as suggested by @deepyaman, might be the right way to go about it. As the only difference between experiments is the inputs and outputs, and not the pipeline being run, you can choose which files to load at run time using some pattern recognition. This might be `TemplatedConfigLoader` in the latest versions, though.
On integration with MLflow: it fits perfectly for running different experiments. Ideally, all of the parameters from the experiment (especially things that differentiate the experiment) should be logged in the experiment, and your models can be registered in MLflow. I think the kedro-mlflow plugin might have this capability.
Edit: A workflow could be this:

- Pass `experiment_name` as an extra parameter at run time.
- Template output filepaths on `experiment_name`, e.g. `data/08_reporting/model_1/${experiment_name}`.
- Keep per-experiment config in `conf/experiments/experiment_name.yaml`.
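A sketch of what such a templated catalog entry could look like with `TemplatedConfigLoader` (the dataset name and type here are hypothetical):

```yaml
# catalog.yml -- `${experiment_name}` is substituted by TemplatedConfigLoader
# from globals / extra params at run time (dataset name is hypothetical)
model_1_results:
  type: pandas.CSVDataSet
  filepath: data/08_reporting/model_1/${experiment_name}/results.csv
```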
@deepyaman, on nodes: my frustration is that it would prevent me from using the full capabilities of pipelines.
@avan-sh --> yes, that's exactly what I have in mind.
@deepyaman @avan-sh on hooks: I'll try to look more into this, but I am a bit skeptical about the possibility of programmatically managing hooks; if you have examples, I am curious to look into them.
On integration with MLflow: I was just sharing this as it had been suggested it might cover my need; but that's not the main topic :)
Re-opening this now that I have a bit of time to look into it again:
@avan-sh the workflow you shared looks promising to me; the only thing that I have difficulty understanding is how to make sure to use the version of the params corresponding to the specified `experiment_name`. Could it be done with a hook?
EDIT: my previous implementation of `after_context_created` was missing `self`.
I can access the params with the `after_context_created` hook (see below) but can't seem to modify the dict; the hook is not supposed to return anything, and I was hoping to leverage the mutability of dictionaries, but this doesn't seem to work (see the test with `VerificationHooks` in the implementation below).

Implementation:
In `src/kedro_tutorial/hooks.py`:
```python
from kedro.framework.hooks import hook_impl


class ExperimentRunHooks:
    @hook_impl
    def after_context_created(self, context) -> None:
        print("Inside ExperimentRunHooks")
        # Trying to modify the dict of params
        context.params["test_hook_param"] = 5


class VerificationHooks:
    @hook_impl
    def after_context_created(self, context) -> None:
        print("Inside hook: VerificationHook")
        print(context.params)
```
In `src/kedro_tutorial/settings.py`:
```python
from pathlib import Path

from kedro_viz.integrations.kedro.sqlite_store import SQLiteStore

from kedro_tutorial.hooks import ExperimentRunHooks, VerificationHooks

SESSION_STORE_CLASS = SQLiteStore
SESSION_STORE_ARGS = {"path": str(Path(__file__).parents[2] / "data")}
HOOKS = (VerificationHooks(), ExperimentRunHooks())  # LIFO order
```
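One plausible explanation for the behaviour described above (an assumption about Kedro internals, not confirmed here): if `context.params` is a property that rebuilds or copies the underlying dict on each access, in-place mutation is silently discarded. A minimal stdlib-only illustration of that mechanism (not Kedro code; the class name is made up):

```python
import copy


class FakeContext:
    """Not Kedro: mimics a `params` property that returns a fresh copy."""

    def __init__(self, params: dict) -> None:
        self._params = params

    @property
    def params(self) -> dict:
        # every access returns a deep copy, so in-place edits never persist
        return copy.deepcopy(self._params)


ctx = FakeContext({"lr": 0.1})
ctx.params["test_hook_param"] = 5       # mutates the copy only
print("test_hook_param" in ctx.params)  # prints: False
```

If Kedro behaves this way, hooks would need another mechanism (e.g. `extra_params` at session creation) rather than mutating `context.params`.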
Also, as pointed out by @avan-sh, we need a hook to inject the extra param `experiment_name` into the `TemplatedConfigLoader`; something like (credits to @avan-sh):
```python
from typing import Any, Dict, Iterable

from kedro.config import ConfigLoader, TemplatedConfigLoader
from kedro.framework.hooks import hook_impl


class ConfigLoaderHooks:
    @hook_impl
    def register_config_loader(
        self, conf_paths: Iterable[str], env: str, extra_params: Dict[str, Any]
    ) -> ConfigLoader:
        globals_dict = {}
        if extra_params:
            globals_dict = {"experiment_name": extra_params["experiment_name"]}
        return TemplatedConfigLoader(
            conf_paths,
            globals_pattern="*globals.yml",
            globals_dict=globals_dict,
        )
```
but I am not sure this `register_config_loader` hook template exists; when testing it, it doesn't appear to be called...
Hello! Has the suggestion of @Vincent-Liagre-QB been taken into account? It would greatly help me if so :)
@cosasha, the `register_config_loader` hook was replaced in Kedro 0.18. The issues here might be tackled in https://github.com/kedro-org/kedro/milestone/9. Possibly someone from the maintainer team could comment on this.
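For what it's worth, if your Kedro version's `OmegaConfigLoader` ships the `runtime_params` resolver, values passed via `--params` can be interpolated into the catalog directly, which covers much of what the removed hook was used for here (the dataset entry below is hypothetical; check your Kedro version before relying on this):

```yaml
# catalog.yml -- `experiment_name` would come from
# `kedro run --params=experiment_name:test_experiment`
model_1_results:
  type: pandas.CSVDataSet
  filepath: data/08_reporting/model_1/${runtime_params:experiment_name}/results.csv
```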
Similar request from @ofir-insait from a month ago:
> As stated by @Vincent-Liagre-QB in option (1) at the beginning of the thread, `--environment=...` only solves part of the problem, and having to write down the modular pipelines to achieve this reusability is indeed a bit cumbersome.
Similar request from @andrko1 today:

> let's say that we have a folder with a date (partition) and I want to access only the specified date, e.g. `${root_path}/${date}/cars.csv`, but for the `${date}` variable, which I want to change every time, it doesn't work with `--params`, as it seems to initialize the default parameters first and then replace the specified values [for example: `kedro run 20230627`]
A similar request from @quantumtrope: https://github.com/kedro-org/kedro/discussions/2958 (and also https://linen-slack.kedro.org/t/14164549/i-have-a-question-about-using-kedro-in-a-non-ml-setting-spec#a956426e-30d3-4a01-98b5-a582e3082da6)
Which is similar to this one from @christopherrabotin a while back https://linen-slack.kedro.org/t/14162145/hi-there-what-s-the-best-way-to-run-a-monte-carlo-simulation#48ef7630-854f-4e98-b698-3534f80a05b7
And this one from @bpmeek even earlier https://linen-slack.kedro.org/t/9703489/hey-everyone-i-m-looking-for-the-kedro-way-of-doing-a-monte-#80277f3a-95a8-4578-ae24-f101dc0244f9
To all people subscribed to this issue, notice that @marrrcin has published an interesting approach using `OmegaConfigLoader` with custom resolvers and a centralised `settings.py`.
Please give it a read https://getindata.com/blog/kedro-dynamic-pipelines/ and let us know what you think.
Today @datajoely recommended @marrrcin's approach as an alternative to Ray Tune for parameter sweep https://linen-slack.kedro.org/t/16014653/hello-very-much-new-to-the-ml-world-i-m-trying-to-setup-a-fr#e111a9d2-188c-4cb3-8a64-37f938ad21ff
Are we confident that the DX offered by this approach can compete with this?
```python
from ray import tune

search_space = {
    "a": tune.grid_search([0.001, 0.01, 0.1, 1.0]),
    "b": tune.choice([1, 2, 3]),
}

tuner = tune.Tuner(objective, param_space=search_space)
```
Originally posted by @astrojuanlu in https://github.com/kedro-org/kedro/issues/2627#issuecomment-1780858670
No, but it does provide a budget version of it; this is what I'm saying about the lack of integration with dedicated "sweepers" in this comment.
Originally posted by @datajoely in https://github.com/kedro-org/kedro/issues/2627#issuecomment-1780876275
Let's continue the conversation about "parameter sweeping"/experimentation here.
To all people subscribed to this issue, notice that @marrrcin has published an interesting approach using:

- `OmegaConfigLoader` with custom resolvers
- Dataset factories
- Modular pipelines with namespaces
- Centralised `settings.py`

Please give it a read https://getindata.com/blog/kedro-dynamic-pipelines/ and let us know what you think.
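For context, the `settings.py` piece of that approach could look roughly like this. A sketch only: it assumes a Kedro version where `OmegaConfigLoader` accepts a `custom_resolvers` argument, and the `merge` resolver here is a simplified stand-in for the one in the blog post (the `try/except` lets the sketch run without Kedro installed):

```python
try:
    from kedro.config import OmegaConfigLoader  # available in recent Kedro
    CONFIG_LOADER_CLASS = OmegaConfigLoader
except ImportError:  # sketch still runs without Kedro installed
    CONFIG_LOADER_CLASS = None


def _merge(*dicts: dict) -> dict:
    """Simplified resolver: later dicts override earlier ones."""
    out: dict = {}
    for d in dicts:
        out.update(d)
    return out


CONFIG_LOADER_ARGS = {
    # usable in YAML as e.g. `${merge:${_base},${_experiment_overrides}}`
    "custom_resolvers": {"merge": _merge},
}
```

Each experiment then only declares its overrides in YAML, and the resolver composes them with the base config at load time.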
@astrojuanlu thanks for sharing this and for the overall work on connecting everything going on around this feature request. The solution you are sharing seems very promising, although also a bit complex. I'll try to take a deeper look into it ASAP.
Nice talk on how to do hyperparameter tuning and selection in Flyte https://www.youtube.com/watch?v=UO1gsXuSTzg (key bit starts around 12 mins in)
Originally posted by @datajoely in https://github.com/kedro-org/kedro/issues/2627#issuecomment-1792174901
Optuna + W&B https://colab.research.google.com/drive/1WxLKaJlltThgZyhc7dcZhDQ6cjVQDfil#scrollTo=sHcr30CKybN7
Originally posted by @datajoely in https://github.com/kedro-org/kedro/issues/2627#issuecomment-1794418925
A user that uses different environments https://linen-slack.kedro.org/t/16041288/question-on-environments-and-credentials-we-are-currently-us#49927057-9256-455d-9213-94b898fcb699
> we have a lot of params that change depending on the pipeline input so we used the envs concept to parametrise through the CLI - works well for us.
Essentially option (1) of @Vincent-Liagre-QB's original ticket. In my opinion this is an abuse of environments, but it's what users want: add a new config file, change a CLI flag, and done.
> A user that uses different environments https://linen-slack.kedro.org/t/16041288/question-on-environments-and-credentials-we-are-currently-us#49927057-9256-455d-9213-94b898fcb699
I am that user! Indeed, we have repurposed envs to act as parameter groups. It works fairly well for us and it's been easy to train new team members on how we use them.
Would love a kedro-native solution though!
PS: For most functionality that is not out of the box in Kedro, the community tends to recommend hooks. My experience is that large projects can end up with dozens of hooks, and each team uses different ones, making onboarding difficult. Also, logic that is applied there might appear as side effects to someone not familiar with them, so my preference is to use them sparingly. Just one person's opinion :)
@netphantom https://github.com/kedro-org/kedro/discussions/3308
> I need to run multiple pipelines with different inputs, so I have configured in my `parameters.yml` something like:
>
> ```yaml
> neural_network_heads: [100, 200, 300]
> ```
>
> I would like Kedro to take each value into account one at a time, and run 3 pipelines. Using Snakemake, I put the `expand` rule and it took care of it. Is it possible to do the same in Kedro?
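As far as I know Kedro has no direct `expand` equivalent, but the idea can be sketched in pure Python (the namespace naming scheme is made up): derive one run configuration per value; each entry could then back a namespaced modular pipeline or a separate `kedro run --params=...` invocation.

```python
# Values that would come from parameters.yml
heads = [100, 200, 300]

# One run configuration per value, mirroring Snakemake's `expand`
runs = [
    {"namespace": f"heads_{h}", "params": {"neural_network_heads": h}}
    for h in heads
]

for run in runs:
    print(run["namespace"], "->", run["params"])
```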
"Live replay" of a user attempting the current approach https://github.com/kedro-org/kedro/discussions/3308 useful for future iterations
When showing dataset factories to some users internally:

> Can I pass the parameters directly on the CLI instead of creating new namespaces?
Description & context
When working outside `kedro`, I often have several parallel configs for the same script (in kedro terms, "pipeline"), e.g. different model configs for a regression model, or specific start/end dates and exclusion patterns for an analysis. The tree could look like: […]

And within `model_1.py`, I'd usually do something like: […]

So that I can then easily run different experiments independently with, for instance:

`python src/model_1.py --conf=experiment_2`

And I'd usually organize results like this (but that's personal; the point is to make it easily configurable): […]
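The original snippets above were lost in extraction; a hypothetical reconstruction of the described pattern (JSON instead of the more likely YAML, to stay stdlib-only; the function name is made up) — one script, several parallel config files, selected by name:

```python
import json
from pathlib import Path


def load_conf(name: str, conf_dir: Path) -> dict:
    """Load the config for the requested experiment, e.g. conf/experiment_2.json."""
    return json.loads((conf_dir / f"{name}.json").read_text())
```

Wired to the CLI with e.g. an argparse `--conf` flag, this gives the `python src/model_1.py --conf=experiment_2` workflow described above.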
Note that: the configs are kept outside of `model_1.py` so that experiments can be run independently and the workflow of adding a conf is seamless.

Now I am wondering: how can I easily have a similar workflow in kedro? What I have thought about so far: […]

Before deep diving into 5: do you have any other ideas? Am I missing something (might very well be the case since I am quite a beginner here)? Am I too biased by my outside-kedro workflow, which might not be that straightforward after all?
Possible Implementation
Using the example case of `spaceflights`' `data_science` pipeline, simply run:

`python src/kedro_tutorial/pipelines/data_science/experiment_run.py --experiment-name="test_experiment"`
Where `src/kedro_tutorial/pipelines/data_science/experiment_run.py` is as below: […] (remarks and required changes below)
Remarks:
Required changes in kedro code: expose `unregistered_ds` in `AbstractRunner.run` (vs. only `free_outputs`).
)Possible Alternatives
See points 1/2/3/4 above