Closed mle-els closed 5 months ago
Hi @mle-els, thanks for your suggestion. It's cool to see that it only took you ~60 lines of code to customize Kedro to your needs!
However, I don't see how your `PipelineBuilder` interacts with the Kedro catalog. Looks like the first inputs of `pb.call` are datasets, but then intermediate results are stored in variables, am I right?
I noticed this in your README:
> Because datasets are now variables that are returned and passed to another function, typos can be easily detected.
Could you describe this problem in a bit more detail?
Hi @astrojuanlu, thanks for your response. `PipelineBuilder` uses the name of the local variable as the name of the corresponding dataset (hence the use of `locals()`). If a dataset doesn't have a variable name, or we don't pass `locals()` when invoking `build()`, a name will be automatically generated, such as `dataset1`.
The bit about typos: when dataset names are strings, IDEs can't check that they actually match. I ran into this problem before, when a code change led to the wrong dataset being used. When dataset names are variable names, this kind of issue is caught sooner.
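To illustrate the failure mode with a small hedged sketch (the dataset names here are made up, not from the spaceflights example): a typo'd string is still a valid string and only fails when the lookup runs, whereas a typo'd variable name is a `NameError` that IDEs and linters flag before anything executes.

```python
# Hypothetical catalog: dataset names mapped to data.
datasets = {"model_input_table": [1, 2, 3]}

# String-based wiring: the typo below is still a perfectly valid string,
# so nothing complains until the lookup actually happens at runtime.
try:
    data = datasets["model_input_tabel"]  # typo goes unnoticed by the IDE
except KeyError as exc:
    print(f"runtime KeyError: {exc}")

# Variable-based wiring: misspelling `model_input_table` here would be a
# NameError that static analysis catches before the code ever runs.
model_input_table = datasets["model_input_table"]
print(model_input_table)  # [1, 2, 3]
```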
Thanks @mle-els. If I understand correctly, there are two issues at hand. One of them could be addressed by having Kedro autogenerate a `name` if one is not given, instead of setting it to `None`, so that most users don't need to specify the `name` (we save one line per node).

What do you think of this assessment?
Linking `kedro-lsp` here for the VSCode extension. I also vaguely remember a conversation about getting a team to make a VSCode extension for Kedro.
Hi @astrojuanlu, I thought of the decorator syntax but didn't find it satisfactory for two reasons:
About the VSCode extension: not everyone uses VSCode. I think an alternative API would apply more broadly. Plus, installing an extension is an extra step added to the learning curve.
I think the most important feature of the proposed builder is that it looks just like normal Python code. Everyone is familiar with how to write and comprehend a Python function. By being friendly and intuitive, you'll make it more attractive for new users.
Clarification on "VSCode extension": the Language Server Protocol (LSP) is now understood by VSCode, vim, JupyterLab, and others. So kedro-lsp is not exclusive to VSCode. Looks like the only major editor missing LSP support is the IntelliJ family.
Having said that, I sympathize with not tying the Kedro DX to a specific family of editors, and with exploring alternative APIs that make inspection easy. However, the `PipelineBuilder` API you propose is quite different from the way things are usually done in Kedro, so it's not something we'd start working on without giving it a lot of thought. I personally don't like having to use `locals()` to consolidate all inputs (it could be error prone if there are other intermediate variables) or having to repeat `pb.call` for every node. Also, this creates an API in which some inputs are strings that get parsed from the catalog and some inputs are variables.
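The concern about `locals()` picking up other intermediate variables can be shown with a toy sketch (this is not the actual `PipelineBuilder` code; `build_name_map` is a hypothetical helper that recovers dataset names from variable bindings):

```python
def build_name_map(local_vars):
    # Map each object's id to the variable name bound to it; when two
    # variables point at the same object, the later binding wins.
    return {id(value): name for name, value in local_vars.items()}

def create_pipeline():
    dataset_a = object()    # intermediate result we want named "dataset_a"
    debug_copy = dataset_a  # unrelated helper alias silently steals the name
    return build_name_map(locals())[id(dataset_a)]

print(create_pipeline())  # "debug_copy", not "dataset_a"
```

Any stray local binding to the same object changes which name gets recovered, and nothing warns the user.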
Just to set expectations clearly: although this touches adjacent issues that are already on our radar, we have limited resources, and I don't think we're going to look into this particular API any time soon.
Hi @astrojuanlu, here is an alternative design that I thought of, but it's harder to pull off, so I went for `pb.call()` instead. If you think it's worth considering, I can draft some code.
```python
@kedro_function
def func1(x) -> Any:
    ...

@kedro_function
def func2(y) -> Any:
    ...

def create_pipeline():
    with kedro_pipeline_builder() as pb:
        y = func1('input', output_names='dataset_y')
        out = func2(y, output_names='dataset_z')
    return pb.build()
```
We could also wrap datasets and params into variables like `pb.dataset('input')` and `pb.param('param')`, but I don't think it adds to readability. Using a mixture of strings and variables looks fine to me.
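A minimal toy sketch of how such a decorator-based builder could work, recording nodes instead of executing the decorated functions. The names mirror the proposal above, but the implementation is my own guess, not the actual draft:

```python
from dataclasses import dataclass, field

_ACTIVE = None  # builder currently in scope

@dataclass(frozen=True)
class Ref:
    """Symbolic reference to a dataset produced by an earlier call."""
    name: str

@dataclass
class _Builder:
    nodes: list = field(default_factory=list)

    def record(self, func, args, output_name):
        # Resolve Refs back to dataset names; plain strings are free inputs.
        inputs = [a.name if isinstance(a, Ref) else a for a in args]
        self.nodes.append((func.__name__, inputs, output_name))
        return Ref(output_name)

    def build(self):
        return list(self.nodes)

class kedro_pipeline_builder:
    def __enter__(self):
        global _ACTIVE
        _ACTIVE = _Builder()
        return _ACTIVE

    def __exit__(self, *exc):
        global _ACTIVE
        _ACTIVE = None

def kedro_function(func):
    # Instead of running `func`, record a node in the active builder.
    def wrapper(*args, output_names=None):
        return _ACTIVE.record(func, args, output_names)
    return wrapper

@kedro_function
def func1(x):
    ...

@kedro_function
def func2(y):
    ...

with kedro_pipeline_builder() as pb:
    y = func1("input", output_names="dataset_y")
    out = func2(y, output_names="dataset_z")

print(pb.build())
# [('func1', ['input'], 'dataset_y'), ('func2', ['dataset_y'], 'dataset_z')]
```

The key design point is that calling a decorated function returns a `Ref` rather than a result, so passing `y` into `func2` wires the DAG edge without running anything.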
actually the need for another tool (IDE extension / kedro-lsp) makes it quite obvious that kedro's pipeline-creation API has (very) low usability

the main issue lies in kedro's approach of constructing DAGs (deeply nested things) via lists (flat things) of nodes that are stitched together via string literals (quite "unpythonic")
here is an example from the kedro docs:
```python
variance_pipeline = pipeline(
    [  # flat list makes it hard to "see" the DAG
        node(len, "xs", "n"),  # stitched together by strings encoding names of input/output variables
        node(mean, ["xs", "n"], "m", name="mean_node"),
        node(mean_sos, ["xs", "n"], "m2", name="mean_sos"),
        node(variance, ["m", "m2"], "v", name="variance_node"),
    ]
)
```
why not "simply" use nested function-calls like dask does it?
# "flat" -> one node per statement
n = len(x)
m = mean(x, n)
m2 = mean_sos(x, n)
v = variance(m, m2)
n = len(x) v = variance( m=mean(xs=x, n=n), m2=mean_sos(xs=x, n=n) )
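For concreteness, the flat style above runs as plain Python if we assume simple definitions for the node functions (the thread only names `mean`, `mean_sos`, and `variance`; these bodies are my guesses):

```python
def mean(xs, n):
    return sum(xs) / n

def mean_sos(xs, n):
    # mean of the sum of squares
    return sum(x * x for x in xs) / n

def variance(m, m2):
    # Var(X) = E[X^2] - E[X]^2
    return m2 - m * m

xs = [1.0, 2.0, 3.0, 4.0]
n = len(xs)
m = mean(xs, n)
m2 = mean_sos(xs, n)
v = variance(m, m2)
print(v)  # 1.25
```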
- if one does not like to imagine DAGs from python code, one could simply use mermaid graphs:
```mermaid
graph TD
a[x]-->|*arg|len[len]
len-->|n|mean[mean]
a-->|xs|mean[mean]
len-->|n|sos[mean_sos]
a-->|xs|sos[mean_sos]
sos-->|m2|var[variance]
mean-->|m|var[variance]
```
Hi @dertilo , thanks for chiming in.
Your proposal suffers from the same problem as the original one:
```python
n = len(x)
v = variance(
    m=mean(xs=x, n=n),
    m2=mean_sos(xs=x, n=n)
)
```
where does `x` come from?
```python
@dataclass
class SomeDatum:
    some_id: str
    data: WhateverData  # text, image, video, array, tensor, who-cares
    more_data: FooBar

@dataclass
class SomeInputNode(Iterable[SomeDatum]):
    """this could be an input-node to some data-processing DAG"""
    name: str

    def __iter__(self) -> Iterator[SomeDatum]:
        # some data-loading code here
        ...
```
* `x=SomeInputNode("foobar")`
```mermaid
graph TD
a[SomeInputNode]-->|x|b[some DAG here]
```
Thanks @dertilo - then for the "free inputs" one would still need to use Python strings.
One of the nice things of the current design is that free inputs and intermediate results are all treated the same, which gives the pipeline definition a uniform aspect.
Another useful thing is that the "wrapping" of the nodes happens outside of the node definition. This was discussed in #2471 - by having decorators like other frameworks and SDKs, the nodes are coupled with the framework in question. In Kedro, nodes can be reused because they are regular, pure functions that do no I/O (see https://sans-io.readthedocs.io/, https://rhodesmill.org/brandon/slides/2015-05-pywaw/hoist/).
> the main issue lies within kedro's approach to construct DAGs (deeply nested things) via lists (flat things) of nodes

Mathematically, a graph is a set of nodes together with a set of edges, $G = (V, E)$.

> that are stitched together via string-literals (quite "unpythonic")
At some point the user needs to specify the names of the datasets, either for everything (current design) or for the free inputs (proposed designs). Dataset names are no different from dictionary keys that represent something else - with the caveat that, yes, such keys are not immediately available in the IDE.
Hi! @mle-els in case it helps, we released a VSCode extension that makes navigating between the Python and YAML files from a Kedro project a breeze https://marketplace.visualstudio.com/items?itemName=kedro.Kedro
Turning this issue into a Discussion to continue the conversation there
Description
I don't like the way pipelines are specified. It's verbose and hard to read. When looking for workflow software, I almost skipped Kedro because the pipeline code is so ugly.
Context
A better way of building pipelines would make my code more pleasant to write and read. It should help other people too and help Kedro recruit more users.
Possible Implementation
UPDATE: THIS IS A NEW PROPOSAL:
END UPDATE
I have made a simple implementation of a pipeline builder, modeled after the spaceflights example. The pipeline can be built like this, which works exactly the same way as the original pipeline. Executable code can be found at https://github.com/mle-els/kedro-pipeline-builder