jernejfrank opened 1 week ago

I left `with_columns` in `h_pandas` for backwards compatibility and have something similar in `h_polars` to be consistent, but IMO we should deprecate `h_pandas` (super short-lived plugin lol), remove it from `h_polars`, and keep it central in `recursive.py`.
Ok, so here is my line of thinking with regards to the changes. The `with_columns` decorator consists of three parts (a hedged usage sketch follows this list):

- We need the input node. This can be the full dataframe if `pass_dataframe_as` is used, or we need to extract columns into nodes if `columns_to_pass` is used. Given that some dataframes are supported in Hamilton's `extract_columns` and some are not, this should be implemented on a per-library basis.
- We need the subdag nodes. Again, we can re-use Hamilton's `subdag` functionality, but some libraries will need more (see `h_spark`), so this is again to be implemented on a per-library basis.
- Last is combining everything into a single dataframe, again to be implemented on a per-library basis.
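To make the two input modes concrete, here is a hedged usage sketch. The `columns_to_pass` / `pass_dataframe_as` parameters are the ones named in this thread; the import path, the `select` parameter, and the column functions are illustrative assumptions, not taken from this PR.

```python
import pandas as pd

from hamilton.plugins.h_pandas import with_columns  # import path may differ per this PR


def col_c(col_a: pd.Series, col_b: pd.Series) -> pd.Series:
    # Computed inside the subdag from two extracted columns.
    return col_a + col_b


# Mode 1: the named columns of the upstream dataframe are extracted into nodes.
@with_columns(col_c, columns_to_pass=["col_a", "col_b"], select=["col_c"])
def final_df(initial_df: pd.DataFrame) -> pd.DataFrame:
    return initial_df


def col_d(full_df: pd.DataFrame) -> pd.Series:
    # Receives the entire upstream dataframe under the chosen name.
    return full_df["col_a"] * 2


# Mode 2: the whole dataframe is handed to the subdag as `full_df`.
@with_columns(col_d, pass_dataframe_as="full_df", select=["col_d"])
def final_df_two(initial_df: pd.DataFrame) -> pd.DataFrame:
    return initial_df
```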
So what I decided is to leave three abstract methods: `get_initial_nodes`, `get_subdag_nodes`, and `create_merge_node`. That should create enough flexibility to implement any dataframe library, but is also concrete enough to wire everything together in `inject_nodes` from `NodeInjector`.
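A minimal sketch of that factory shape, assuming the three abstract methods named above. Everything else (class name, method signatures, and the exact `inject_nodes` signature/return type of `NodeInjector`) is approximated, not copied from the PR:

```python
import abc
from typing import Any, Callable, Dict, List, Type

from hamilton import node
from hamilton.function_modifiers.base import NodeInjector


class WithColumnsBase(NodeInjector, abc.ABC):
    """Sketch: wires input -> subdag -> merge; subclasses fill in per-library parts."""

    def __init__(self, *load_from: Callable, dataframe_type: Type, **kwargs: Any):
        self.load_from = load_from
        self.dataframe_type = dataframe_type  # e.g. pd.DataFrame or pl.LazyFrame

    @abc.abstractmethod
    def get_initial_nodes(self, fn: Callable, params: Dict[str, Any]) -> List[node.Node]:
        """Full-dataframe node (`pass_dataframe_as`) or per-column nodes (`columns_to_pass`)."""

    @abc.abstractmethod
    def get_subdag_nodes(self, config: Dict[str, Any]) -> List[node.Node]:
        """Re-uses Hamilton's `subdag` machinery; h_spark-style plugins can add more."""

    @abc.abstractmethod
    def create_merge_node(self, upstream_node: node.Node, node_name: str) -> node.Node:
        """Combines the selected columns back into a single dataframe."""

    # NodeInjector hook -- the real signature/return type may differ; this only
    # shows how the three abstract pieces get wired together.
    def inject_nodes(self, params: Dict[str, Any], config: Dict[str, Any], fn: Callable):
        initial_nodes = self.get_initial_nodes(fn, params)
        subdag_nodes = self.get_subdag_nodes(config)
        merge_node = self.create_merge_node(subdag_nodes[-1], node_name="__append")
        return initial_nodes + subdag_nodes + [merge_node]
```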
Now, every plugin library (`h_pandas`, `h_polars`, and `h_polars_lazyframe`) inherits from this class, and in its initialisation calls out to the parent factory init, passing in the required dataframe type (e.g. `pd.DataFrame`, `pl.DataFrame`, or `pl.LazyFrame`), which is in turn derived from the extension modules. So in effect we use the `registry` approach without hard-binding ourselves to implementing any functionality in there. Since that part of the API is private, should we want to switch to the registry, the refactoring is straightforward and shouldn't get us into trouble down the road.
Nice, I think this is a good overview. Note there might still be shared stuff between the implementations, in which case you have two options to reduce duplicated code (should you want). But I think these will be options for later, and quite possibly over-engineered.

To be clear, this doesn't look like it works with Spark yet? Do you think that's a possibility?
Please let me know if the scope creep is too big and I can cut some things out into a new PR.
Changes

- Added a `registry` single-dispatch method and enabled it for Pandas / Polars for now (a hedged sketch of the pattern follows).
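A sketch of the single-dispatch pattern this refers to, with a hypothetical `append_columns` hook; the actual method name and semantics in the PR may differ:

```python
from functools import singledispatch

import pandas as pd
import polars as pl


@singledispatch
def append_columns(df, columns: dict):
    """Fallback for dataframe libraries without a registered implementation."""
    raise NotImplementedError(f"No column-append implementation for {type(df)}")


@append_columns.register(pd.DataFrame)
def _(df: pd.DataFrame, columns: dict) -> pd.DataFrame:
    return df.assign(**columns)


@append_columns.register(pl.DataFrame)
def _(df: pl.DataFrame, columns: dict) -> pl.DataFrame:
    return df.with_columns(**columns)
```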
How I tested this
Notes
The one thing I didn't touch is the Spark extension (zero experience with PySpark): the implementation there is different so far, and I'm not sure we can use `extract_columns` as straightforwardly as with pandas/polars, but happy to tackle that as well in case it's possible.
Checklist