duckdblabs / db-benchmark

reproducible benchmark of database-like ops
https://duckdblabs.github.io/db-benchmark/
Mozilla Public License 2.0

rolling functions #9

Open jangorecki opened 1 year ago

jangorecki commented 1 year ago

Draft version of the rolling functions benchmark requested in #6.

We can define the scope for those tests here. Some are already in place, others are only mentioned in comments.


PR roadmap:

optionally: test scripts and validate results:

rolling functions not available in:

jangorecki commented 1 year ago

As there was no feedback on the scope I proposed in April, I took another step forward and defined it more precisely. Please have a look at the proposed questions. It is not easy to cover all the possible features well within 10 questions, so I ended up focusing on:

q1: rolling mean
q2: window small
q3: window big
q4: multi vars+cols
q5: median
q6: weighted
q7: uneven dense
q8: uneven sparse
q9: regression (by.column)
q10: UDF (one that will generally not be optimized, to mimic arbitrary UDF)

Looking forward to feedback on the scope of this test, or implementations in other software. Note that we may be forced to move from N {1e6, 1e7, 1e8} to N {1e5, 1e6, 1e7} depending on how well other tools scale.
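
For illustration, here is a minimal data.table sketch of a few of these questions (the actual scripts in the PR are authoritative; DT, v1, and the window sizes below are assumptions):

# Minimal sketch, assuming a data.table DT with a numeric column v1;
# window sizes follow the ratios discussed later in this thread.
library(data.table)
set.seed(108)
DT <- data.table(v1 = rnorm(1e5))
w      <- as.integer(nrow(DT) / 1e3)   # default window used in most questions
wsmall <- as.integer(nrow(DT) / 1e4)   # q2: small window
wbig   <- as.integer(nrow(DT) / 1e2)   # q3: big window

q1 <- DT[, frollmean(v1, w)]            # q1: rolling mean
q2 <- DT[, frollmean(v1, wsmall)]       # q2: rolling mean, small window
q3 <- DT[, frollmean(v1, wbig)]         # q3: rolling mean, big window
q5 <- DT[, frollapply(v1, w, median)]   # q5: rolling median (generic interface, slower)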

jangorecki commented 1 year ago

Instead of the q10 UDF I would propose either:

Of the two options above I am in favor of min.

Posting here before making the change, as I hope there may be some other ideas.

edit: amended in https://github.com/duckdblabs/db-benchmark/pull/9/commits/045b7d5729b4f7c5b705cd647e6ebfde631afcf8

jangorecki commented 1 year ago

Other topics that are subject to community review are:

  1. data sizes, either

    • 1e5, 1e6, 1e7 (could be desirable so that slower tools can still run)
    • 1e6, 1e7, 1e8 (currently implemented, feels fine)
    • 1e7, 1e8, 1e9 (if we drop the UDF in q10 this should be doable)
    • 1e5, 1e7, 1e9 (mix of all)
  2. window size

    w = nrow(x)/1e3       ## used in 8 out of 10 questions
    wsmall = nrow(x)/1e4  ## used in q2
    wbig = nrow(x)/1e2    ## used in q3

    For the 1e9 data size the window would be 1e6 rows, which feels unrealistically big. I think we could improve the window sizes.

  3. unevenly ordered series (q7, q8 or in recent HEAD q8, q9)

    DT[["id2"]] = sort(sample(N*1.1, N))         ## index dense
    DT[["id3"]] = sort(sample(N*2, N))           ## index sparse

    The dense index spans 110% of nrow; the sparse index spans 200% of nrow. I am not sure we are stressing the sparse scenario well enough; it could be even 1000% of nrow. The problem then is that we cannot easily use the 1e9 data size, because the index would range from 1 to 1e10 and many tools would be excluded (maybe that is fine?). Using 200% of nrow, the index still fits into the int32 type; see the quick check below.
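
    A quick sanity check of those ranges against the int32 limit (illustration only, not part of the benchmark scripts):

    # Illustration only: maximum index value for each candidate data size and
    # sparsity factor, compared against the int32 limit (~2.147e9).
    for (N in c(1e7, 1e8, 1e9)) {
      cat(sprintf("N=%.0e: dense(1.1x)=%.3g  sparse(2x)=%.3g  sparse(10x)=%.3g  int32 max=%d\n",
                  N, 1.1 * N, 2 * N, 10 * N, .Machine$integer.max))
    }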

AdrianAntico commented 1 year ago

@jangorecki you have a solid set of measures; the only other type I'd consider would be differencing.

era127 commented 1 year ago

I think it would be helpful if the dplyr test was documented as a representation of the slider package, as opposed to using RcppRoll. The dplyr benchmark has been confusing to me because you can use dplyr with duckdb or data.table. For data.table, it is more obvious that you are benchmarking the rolling functions inside the data.table package.

jangorecki commented 1 year ago

@AdrianAntico that is an interesting idea, but from a brief look at a potential implementation (diff in R), it doesn't seem to stress the windowing computation. If you could provide a code example it would be clearer.
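
For reference, a hypothetical sketch of plain differencing in data.table (toy data, not the benchmark's); each output value depends on a fixed pair of rows rather than a sliding window:

library(data.table)
DT <- data.table(v1 = cumsum(rnorm(10)))   # toy data for illustration
DT[, diff1 := v1 - shift(v1, 1L)]          # lag-1 difference; same values as base::diff() with a leading NA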

@rdavis120 maintaining new solutions is relatively costly; putting slider under dplyr was an easy way to avoid that. Rather than adding slider as a new solution, I would prefer to rename dplyr to tidyverse so it fits well and there is no need to add another solution. Anyway, I would prefer to keep this issue's discussion on the scope of the rolling task rather than naming details.

AdrianAntico commented 1 year ago

@jangorecki the intent was more time series related (and cross-row related). I have a diff function in my GitHub package "Rodeo" if you want a full example, starting at line 443... https://github.com/AdrianAntico/Rodeo/blob/main/R/FeatureEngineering_CrossRowOperations.R

jangorecki commented 1 year ago

@Tmonster could we get CI workflow approval?

jangorecki commented 1 year ago

@Tmonster Is there a way I could approve CI runs myself? Does it run on duckdblabs' private runners? If not, then I don't think there should be any concerns.

I added the pandas rollfun script and (hopefully) fixed the failure in the previous GH Actions job.

Tmonster commented 1 year ago

> @Tmonster Is there a way I could approve CI runs myself? Does it run on duckdblabs' private runners? If not, then I don't think there should be any concerns.
>
> I added the pandas rollfun script and (hopefully) fixed the failure in the previous GH Actions job.

Yeah, I thought the CI would run after every push once I approved it the first time. I don't think I have the permissions to enable that yet. Will talk to Mark about it tomorrow 👍

bkamins commented 1 year ago

@jangorecki I will propose code for juliadf.

bkamins commented 1 year ago

@jangorecki - which is the reference implementation of all the questions from Q1 to Q10? (I am asking because I have checked several solutions and they indicate "not implemented yet", so I do not know how to reproduce them exactly in Julia.)

In particular, I am not clear what frollreg(list(x$v1, x$v2), w) should mean in data.table. You input two vectors and want to run a regression on them, but it seems strange for two reasons:

Also I noticed the comment:

## Killed, UDF simply does not scale, needs to be specialized fun

So the question is whether you allow specialized functions or the opposite - you do not allow them and accept that the process does not produce a result (this was the approach in your earlier benchmarks - you wanted to check the performance of "out of the box" solutions without writing custom code tuned for performance).

jangorecki commented 1 year ago

@bkamins rolling regression is currently available only for duckdb. frollreg will most likely never exist, as there are already good implementations of rolling regression in other R packages. As for the design of q10, I would lean toward finding the most popular rolling regression question on Stack Overflow and aligning with it. It is quite frequently requested functionality, which is why I decided to have it in the scope of this task.

Doing rolling regression with frollapply(lm, by.column=F) (or a UDF in any other solution) is possible, but it will be 100-1000 times slower than a specialized version, so IMO it doesn't make sense to include rolling regression via generic UDF interfaces. For rolling regression we want only specialized functions. The (unoptimized) UDF question initially proposed went out of scope due to its terrible scaling; a rough sketch of the difference follows.
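
To make the gap concrete, here is a rough sketch (not the benchmark's reference implementation; the window size, toy data, and right-aligned windows are assumptions) of a specialized rolling slope built from rolling sums versus a per-window lm() UDF:

# Sketch only. Specialized rolling OLS slope of y on x via rolling sums (O(n))
# versus a generic UDF that refits lm() in every window.
library(data.table)
set.seed(108)
n <- 1e4; w <- 100L
x <- rnorm(n); y <- 1 + 2 * x + rnorm(n)

# specialized: closed-form slope from rolling sums
sx  <- frollsum(x, w);     sy  <- frollsum(y, w)
sxy <- frollsum(x * y, w); sxx <- frollsum(x * x, w)
slope_fast <- (w * sxy - sx * sy) / (w * sxx - sx^2)

# generic UDF: refit lm() in every window, orders of magnitude slower
slope_udf <- c(rep(NA_real_, w - 1L),
               vapply(w:n, function(i) {
                 idx <- (i - w + 1L):i
                 coef(lm(y[idx] ~ x[idx]))[[2L]]
               }, numeric(1)))

all.equal(slope_fast, slope_udf)   # should be TRUE up to numerical noise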

bkamins commented 1 year ago

But then - back to my original question: which is the reference implementation of the questions that I should match the results against in the Julia implementation? (In particular, do you want to include a constant term in the regression, and what should be returned from the operation?)

Thank you!

jangorecki commented 1 year ago

That hasn't been settled yet. I need to go through Stack Overflow to find common questions about it. Actually, it would be helpful if you could propose one which you believe is the common problem that users are looking to solve with rolling regression.

bkamins commented 1 year ago

Incidentally, just today I saw an example from a user. The user wanted to run a y ~ x kind of regression and keep the result as a vector of collections, something like:

3-element Vector{Vector{Float64}}:
 [0.11728235062958436, 0.9228342578148421]
 [0.2160138268973083, 0.41776928538024183]
 [0.42587771406039454, 0.10348333203334836]

(so both intercept and slope are kept in a single entry)

jangorecki commented 1 year ago

What I did for duckdb q10 for now is r^2, because that is what we used in groupby q9. A rough sketch of the idea is below.
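
For reference, a hedged sketch of how a rolling r^2 could be expressed through DuckDB's windowed aggregates from R (the table name, columns, and 100-row window are assumptions; the actual q10 query in the PR may differ):

# Sketch only; assumes regr_r2() used as a windowed aggregate over a toy table.
library(DBI)
library(duckdb)
con <- dbConnect(duckdb::duckdb())
dbExecute(con, "CREATE TABLE x AS
                SELECT i AS id1, random() AS v1, random() AS v2
                FROM range(1000) t(i)")
ans <- dbGetQuery(con, "
  SELECT id1,
         regr_r2(v2, v1) OVER (
           ORDER BY id1 ROWS BETWEEN 99 PRECEDING AND CURRENT ROW
         ) AS r2
  FROM x
  ORDER BY id1")
dbDisconnect(con, shutdown = TRUE)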

jangorecki commented 1 year ago

duckdb, spark, and pandas (q8 and q9 only) do not have an option for properly handling an incomplete rolling window. Timings for those solutions will not include the postprocessing required to match exactly the same result (NULL vs a value from an unexpected window size), as the overhead would be too big.

jangorecki commented 1 year ago

Development on this branch is possibly finished. The 5 solutions added so far have been validated using https://github.com/jangorecki/db-benchmark/blob/rollfun/_utils/rollfun-ans-validation.txt

@bkamins if you would like to add Julia, you are welcome; please use the commands from the file linked above to validate answers against one of the solutions.

Once I confirm that the report is produced fine (after running the whole rollfun benchmark), the PR will be ready to merge.

jangorecki commented 1 year ago

@Tmonster PR is ready to merge

To reproduce

# install R and python

git clone https://github.com/jangorecki/db-benchmark --branch rollfun --single-branch --depth 1
cd db-benchmark
# install solutions interactively
./dplyr/setup-dplyr.sh
./datatable/setup-datatable.sh
./pandas/setup-pandas.sh
./duckdb-latest/setup-duckdb-latest.sh
./spark/setup-spark.sh

# prepare data
Rscript _data/rollfun-datagen.R 1e6 NA 0 1
Rscript _data/rollfun-datagen.R 1e7 NA 0 1
Rscript _data/rollfun-datagen.R 1e8 NA 0 1
mkdir data
mv R1_1e*_NA_0_1.csv data

vim run.conf
# do_upgrade false
# force run true
# report false
# publish false
# run_task rollfun
# run_solution data.table dplyr pandas duckdb-latest spark

sudo swapoff -a

# workaround for #30: `=~` matching in run.sh causes "duckdb-latest" to match "duckdb"
vim run.sh
# comment out line 76 in run.sh: if [[ "$RUN_SOLUTIONS" =~ "duckdb" ]]; then ./duckdb/ver-duckdb.sh; fi;

./run.sh > ./run.out
jangorecki commented 11 months ago

@Tmonster any idea if the PR can make it to master before the scheduled September run?