marsupialtail / quokka

Making data lake work for time series
https://marsupialtail.github.io/quokka/
Apache License 2.0

join order and CBO meta-thread #11

Closed. marsupialtail closed this issue 1 year ago.

marsupialtail commented 1 year ago

The Quokka execution plan is now in a shape where it can support CBO and join ordering. This is a great issue for someone who wants to impress a Snowflake or Redshift interviewer! You can say you implemented a CBO join optimizer almost from scratch haha.

What needs to be done:

Currently Quokka's optimizer implements predicate pushdown, early projection, and map fusion.

After these three steps, most of the nodes in the system should be MapNodes and StatefulNodes (aka joins). Some of those joins might already be broadcast joins with materialized arguments; that's okay.

1) Join Fusion. We are now going to replace trees of binary join nodes with single join nodes that take multiple operands. This will greatly simplify the tree and expose different choices of joins. We will do predicate pull-up from the individual join nodes. This should be a pretty straightforward optimization pass.
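
Below is a minimal sketch of what such a fusion pass could look like. The `JoinNode` class and its fields are hypothetical stand-ins for illustration, not Quokka's actual plan nodes.

```python
from dataclasses import dataclass, field

# Hypothetical plan-node class for illustration only; Quokka's real nodes differ.
@dataclass
class JoinNode:
    children: list                                       # child plan nodes
    predicates: list = field(default_factory=list)       # predicates attached to this join

def fuse_joins(node):
    """Collapse a tree of binary JoinNodes into one multi-operand JoinNode,
    pulling the predicates of the fused children up into the parent."""
    if not isinstance(node, JoinNode):
        return node
    fused_children, pulled_predicates = [], list(node.predicates)
    for child in map(fuse_joins, node.children):          # fuse bottom-up
        if isinstance(child, JoinNode):
            fused_children.extend(child.children)          # absorb the child's operands
            pulled_predicates.extend(child.predicates)     # predicate pull-up
        else:
            fused_children.append(child)
    return JoinNode(children=fused_children, predicates=pulled_predicates)
```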

2) Cardinality Estimation. Input nodes should now have all the relevant early projections and pushed-down filters, and can estimate how much data they will produce based on this. We are going to estimate the number of unique values for each column as well as the total expected size of the output from each node. This estimate will be pushed up the tree as far as possible. If the estimated size is smaller than some threshold, the entire input will be materialized as a constant variable in the query plan. The relevant joins will be converted to broadcast joins at this stage. This should also be a pretty straightforward pass on the graph. The hardest part will be implementing the actual cardinality estimator that answers these estimates given a predicate.
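
A rough sketch of the kind of estimate meant here, under the usual independence assumptions. The column statistics, selectivity guesses, and broadcast threshold below are made-up inputs for illustration, not values Quokka actually uses.

```python
BROADCAST_THRESHOLD_BYTES = 50 * 1024 * 1024   # hypothetical cutoff for materialization

def estimate_filtered_rows(table_rows, predicates, distinct_counts):
    """Estimate rows surviving the pushed-down filters, assuming independent
    predicates and that an equality predicate selects 1/NDV of the rows."""
    selectivity = 1.0
    for col, op in predicates:                  # e.g. [("o_orderdate", "="), ...]
        if op == "=":
            selectivity *= 1.0 / max(distinct_counts.get(col, 10), 1)
        else:                                   # range/other predicates: fixed guess
            selectivity *= 1.0 / 3.0
    return table_rows * selectivity

def should_broadcast(estimated_rows, avg_row_bytes):
    """Materialize this input as a constant and convert the join to a
    broadcast join if the estimated output is below the threshold."""
    return estimated_rows * avg_row_bytes < BROADCAST_THRESHOLD_BYTES
```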

3) Join order selection. The join node will now figure out the order in which to do the joins. If some of its inputs are materialized, it will join them in the main thread first. We are going to restrict ourselves to left-deep plans and enumerate all n! possible orderings, using the usual independence assumptions to compute the join cost of each one. Tables that have already been materialized will be joined first to produce bigger or smaller materialized tables; this could potentially lead to a size blowup, which we don't address.
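
A self-contained sketch of the left-deep enumeration, costing each ordering by total intermediate rows under the independence assumptions. The row counts and join-key NDVs are assumed to come from the cardinality estimates in step 2; all names and numbers here are illustrative.

```python
import itertools

def best_left_deep_order(table_rows, join_key_ndv, materialized=()):
    """Enumerate all n! left-deep orders (already-materialized inputs are joined
    first) and return the cheapest one, where cost = total intermediate rows."""
    fixed = [t for t in table_rows if t in materialized]       # joined first
    free = [t for t in table_rows if t not in materialized]
    best_cost, best_order = float("inf"), None
    for perm in itertools.permutations(free):
        order = fixed + list(perm)
        rows, cost = table_rows[order[0]], 0.0
        for nxt in order[1:]:
            # |L join R| ~= |L| * |R| / NDV of the join key (independence assumption)
            rows = rows * table_rows[nxt] / max(join_key_ndv[nxt], 1)
            cost += rows
        if cost < best_cost:
            best_cost, best_order = cost, order
    return best_order, best_cost

# Example with made-up TPC-H-ish sizes:
# best_left_deep_order({"customer": 1.5e5, "orders": 1.5e6, "lineitem": 6e6},
#                      {"customer": 1.5e5, "orders": 1.5e6, "lineitem": 1.5e6})
```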

marsupialtail commented 1 year ago

Check the staged-execution branch.