pola-rs / polars

Dataframes powered by a multithreaded, vectorized query engine, written in Rust
https://docs.pola.rs

Elementwise check on `join` expressions is too restrictive #17895

Open Oreilles opened 3 months ago

Oreilles commented 3 months ago


Reproducible example

All of the following expressions fail with InvalidOperationError: All join key expressions must be elementwise.

import polars as pl

df = pl.DataFrame({"a": [1, 2, 3, 4]})

df.join(df, left_on="a", right_on=pl.int_range(pl.len()))
df.join(df, left_on="a", right_on=pl.col("a") * pl.len())
df.join(df, left_on="a", right_on=pl.col("a").cum_count())
df.join(df, left_on="a", right_on=pl.col("a").shift())

Log output

No response

Issue description

Since https://github.com/pola-rs/polars/pull/17517, the expressions allowed in join clauses are checked to ensure that every row in both DataFrames will be assigned a key value. This check is implemented with the function is_streamable, but it is currently too restrictive, marking as invalid expressions that should be allowed.

This is especially inconvenient because it makes it impossible to use the implicit row index of a DataFrame in those expressions, forcing us to materialize it with with_row_index() and drop it again at the end with .drop("index").
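For reference, a minimal sketch of that workaround for the first example above. The names and the cast are assumptions: with_row_index() produces a UInt32 column named "index" by default, so it is cast here to match the key dtype, and the .drop("index") mentioned above removes the helper column if it is still present after the join.

```python
import polars as pl

df = pl.DataFrame({"a": [1, 2, 3, 4]})

# Intended expression: df.join(df, left_on="a", right_on=pl.int_range(pl.len()))
# Workaround: materialize the row index as a named column on the right frame,
# cast it to the key dtype, and join on the named column instead.
right = df.with_row_index("index").with_columns(pl.col("index").cast(pl.Int64))
out = df.join(right, left_on="a", right_on="index")
```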

Expected behavior

The check should better discriminate between valid and invalid expressions.

Installed versions

```
--------Version info---------
Polars:               1.2.1
Index type:           UInt32
Platform:             macOS-14.5-arm64-arm-64bit
Python:               3.12.4 (main, Jun 6 2024, 18:26:44) [Clang 15.0.0 (clang-1500.3.9.4)]
----Optional dependencies----
adbc_driver_manager:
cloudpickle:
connectorx:
deltalake:
fastexcel:
fsspec:
gevent:
great_tables:
hvplot:               0.10.0
matplotlib:           3.9.1
nest_asyncio:         1.6.0
numpy:                2.0.0
openpyxl:             3.1.5
pandas:               2.2.2
pyarrow:              17.0.0
pydantic:             2.8.2
pyiceberg:
sqlalchemy:
torch:
xlsx2csv:
xlsxwriter:
```
wence- commented 3 months ago

I think this is a good point, and one could do a tighter job of classifying expressions.

Before embarking on this for the purpose of join keys, we should decide what "valid" join keys look like.

Your example shows four different styles of expression for the right join key, and these are not trivial to classify correctly. Note that pl.len() is a reduction for which we need to know that the result will symbolically produce something that, when used, yields a join key of the correct length. Consider instead a different reduction, pl.col("a").count(): it happens to produce the same value as pl.len() iff a contains no nulls. Both would be usable in the a * reduced_value case (your second example), but only len is guaranteed to be usable for your first example.
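A small sketch of that distinction (the data is made up for illustration): count() skips nulls while len() counts every row, so the two reductions only agree when the column has no nulls.

```python
import polars as pl

# Hypothetical column containing a null.
df = pl.DataFrame({"a": [1, None, 3]})

df.select(pl.len(), pl.col("a").count())
# pl.len()            -> 3  (number of rows)
# pl.col("a").count() -> 2  (number of non-null values in "a")
```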

Oreilles commented 3 months ago

I just found out that joining on a single literal expression is accepted, but doesn't really make sense. It kind of would if the literal were applied to all rows in the right DataFrame, but it ends up being applied only to the first one.

df = pl.DataFrame({"a": [1,2,2]})
df.join(df, left_on="a", right_on=pl.lit(1, dtype=pl.Int64))
# ┌─────┬─────────┐
# │ a   ┆ a_right │
# │ --- ┆ ---     │
# │ i64 ┆ i64     │
# ╞═════╪═════════╡
# │ 1   ┆ 1       │
# └─────┴─────────┘

df.join(df, left_on="a", right_on=pl.lit(2, dtype=pl.Int64))
# ┌─────┬─────────┐
# │ a   ┆ a_right │
# │ --- ┆ ---     │
# │ i64 ┆ i64     │
# ╞═════╪═════════╡
# │ 2   ┆ 1       │
# │ 2   ┆ 1       │
# └─────┴─────────┘
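For comparison, a sketch of what the broadcast interpretation would look like, obtained by materializing the literal as a full-length column on the right frame first (the column name "key" is arbitrary):

```python
import polars as pl

df = pl.DataFrame({"a": [1, 2, 2]})

# Broadcast the literal across every row of the right frame, then join on it.
right = df.with_columns(key=pl.lit(2, dtype=pl.Int64))
df.join(right, left_on="a", right_on="key")
# Now every row of the right frame matches a == 2, rather than only the first row.
```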

Does this deserve another bug report?

wence- commented 3 months ago

That was part of the discussion in https://github.com/pola-rs/polars/issues/17184 (third point). I think