Open wence- opened 1 month ago
How does it perform if you turn off `comm_subexpr_elim`?
And how do you get so deep expressions? 😉
> How does it perform if you turn off `comm_subexpr_elim`?
Same; indeed, with all optimisations off it's still quadratic, just faster overall:
```shell
for i in 1 200 400 800 1600; do python slow-optimise.py ${i}; done
1: 0.0027743840000766795
200: 0.010377078000601614
400: 0.030803929999819957
800: 0.12257709299956332
1600: 0.47342776200093795
```
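The shape of this slowdown can be modelled outside Polars. This is a toy reconstruction (the `Add`/`Col` classes and the counting are my own, not Polars code): annotating every node of a deep expression with an uncached, recursive `to_field`-style call does O(N) work per node, O(N²) overall, so call counts roughly quadruple when the depth doubles.

```python
class Col:
    def __init__(self, name):
        self.name = name

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

def build(depth):
    # ((((a + a) + a) + a) + a): a left-leaning chain of `depth` additions
    expr = Col("a")
    for _ in range(depth):
        expr = Add(expr, Col("a"))
    return expr

def to_field(expr, counter):
    # Uncached: every call re-walks the entire subtree below `expr`.
    counter[0] += 1
    if isinstance(expr, Add):
        to_field(expr.left, counter)
        to_field(expr.right, counter)
    return "i64"  # toy dtype rule: everything stays i64

def annotate_all(expr, counter):
    # Attach a dtype at every node of the plan, as the optimiser does.
    if isinstance(expr, Add):
        annotate_all(expr.left, counter)
        annotate_all(expr.right, counter)
    to_field(expr, counter)

for depth in (100, 200, 400):
    calls = [0]
    annotate_all(build(depth), calls)
    print(depth, calls[0])  # call counts grow ~4x when depth doubles
```

For a chain of N additions this makes N² + 3N + 1 `to_field` calls in total, matching the quadratic timings above.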
> And how do you get so deep expressions? 😉
I've seen things...
It was more that I was transpiling the plan and noticed performance slowdowns when annotating the logical plan with dtypes for every expression node.
Will see if we can apply some memoization in the `to_field` call.
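A sketch of what that memoisation could look like (a Python model over a hypothetical `Add`/`Col` tree, not the actual Rust): cache the result per node identity, so each subtree is typed at most once and annotating every node becomes linear.

```python
class Col:
    def __init__(self, name):
        self.name = name

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

def to_field_memo(expr, cache, counter):
    # Memoised on node identity: each node's field is computed exactly once.
    key = id(expr)
    if key in cache:
        return cache[key]
    counter[0] += 1
    if isinstance(expr, Add):
        to_field_memo(expr.left, cache, counter)
        to_field_memo(expr.right, cache, counter)
    cache[key] = "i64"  # toy dtype rule
    return cache[key]

def annotate(node, cache, counter):
    # Annotate every node; the shared cache makes this O(N) total.
    if isinstance(node, Add):
        annotate(node.left, cache, counter)
        annotate(node.right, cache, counter)
    to_field_memo(node, cache, counter)

# Build ((((a + a) + a) + a) + a) ... with 400 additions.
expr = Col("a")
for _ in range(400):
    expr = Add(expr, Col("a"))

cache, counter = {}, [0]
annotate(expr, cache, counter)
print(counter[0])  # 801: one computation per node (400 adds + 401 leaves)
```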
I am not entirely sure the tree itself isn't quadratic. I need to do double the `to_field` calls of the depth, because at every depth we branch.
I don't think so. The expression is (for $N = 4$) `((((a + a) + a) + a) + a)`.
Or, drawn as a tree:

```
(+)
 | \
(+)  a
 | \
(+)  a
 | \
(+)  a
 | \
 a   a
```
So if you need to call `to_field` on every node, that's $2N + 1$ calls ($N$ additions plus $N + 1$ leaves), which is why you're getting double the depth, I think. But at every node, `to_field` needs to recurse into the subtree if we're not memoising, or otherwise doing a one-pass bottom-up production of the dtypes of every node given a schema context.
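The one-pass bottom-up option mentioned here could look like this sketch (again a toy `Add`/`Col` model with my own naming): an iterative post-order walk that types children before parents, producing the dtype of every node exactly once, with no memoisation and no deep native recursion.

```python
class Col:
    def __init__(self, name):
        self.name = name

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

def annotate_bottom_up(root, schema):
    # Iterative post-order traversal: children are typed before parents,
    # so each node's dtype is computed exactly once -- O(N) total work.
    dtypes = {}
    stack = [(root, False)]
    while stack:
        node, children_done = stack.pop()
        if isinstance(node, Col):
            dtypes[id(node)] = schema[node.name]
        elif children_done:
            # Toy arithmetic rule for the result dtype of (left + right).
            left, right = dtypes[id(node.left)], dtypes[id(node.right)]
            dtypes[id(node)] = left if left == right else "f64"
        else:
            stack.append((node, True))
            stack.append((node.right, False))
            stack.append((node.left, False))
    return dtypes

expr = Col("a")
for _ in range(1600):
    expr = Add(expr, Col("a"))
fields = annotate_bottom_up(expr, {"a": "i64"})
print(len(fields), fields[id(expr)])  # 3201 nodes typed in one pass, root is i64
```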
BTW: should I expect that the `AExpr` processing treats expressions with common sub-expressions like DAGs, or like trees (increasing the perceived size)?
That is, if I write:
```python
expr = pl.col("a")
expr = expr + expr
expr2 = expr + expr
```
Which is:

```
 (+)
 / \
 \ /
 (+)
 / \
 \ /
  a
```
Is it "seen" as:

```
    (+)
    / \
 (+)   (+)
 / \   / \
a   a a   a
```
It is seen as trees. They are turned into DAGs during execution if CSE recognises them.
Thanks. I suspect that means there are (perhaps pathological) cases where expression processing time can be exponential: any time there is sharing in the expression. Though perhaps that is an unlikely case for typical queries.
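A toy illustration of that exponential case (classes are mine, for illustration only): d self-doublings like `expr = expr + expr` produce a DAG with only d + 1 distinct nodes, but the expanded tree view has 2^(d+1) − 1 nodes.

```python
class Col:
    def __init__(self, name):
        self.name = name

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

def tree_size(expr, memo=None):
    # Number of nodes the expression has when viewed as a tree.
    # (Memoised on node identity only so this demo stays fast; a naive
    # walk over the tree view would itself take exponential time.)
    memo = {} if memo is None else memo
    key = id(expr)
    if key not in memo:
        if isinstance(expr, Col):
            memo[key] = 1
        else:
            memo[key] = 1 + tree_size(expr.left, memo) + tree_size(expr.right, memo)
    return memo[key]

def dag_size(expr, seen=None):
    # Number of distinct nodes when sharing is respected.
    seen = set() if seen is None else seen
    if id(expr) not in seen:
        seen.add(id(expr))
        if isinstance(expr, Add):
            dag_size(expr.left, seen)
            dag_size(expr.right, seen)
    return len(seen)

expr = Col("a")
for _ in range(60):
    expr = Add(expr, expr)  # sharing at every level

print(dag_size(expr))   # 61 nodes as a DAG
print(tree_size(expr))  # 2**61 - 1 nodes if expanded to a tree
```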
Checks
Reproducible example
Log output
Issue description
Optimising the logical plan has, I think, performance that is quadratic in the expression depth. I think the culprit is `AExpr::to_field`, which is called to attach a dtype at every node in the graph. But since the result is not cached, this is O(N) for a node, and done O(N) times.

Running `samply python run.py 1600` shows almost all the time is in `get_arithmetic_field -> to_field -> stacker::maybe_grow -> get_arithmetic_field -> ...`
Expected behavior
Although this is somewhat pathological, I would expect this to be linear in the expression depth.
Installed versions