h2oai / db-benchmark

reproducible benchmark of database-like ops
https://h2oai.github.io/db-benchmark
Mozilla Public License 2.0

use on-disk data storage if OOM happens #126

Closed jangorecki closed 4 years ago

jangorecki commented 4 years ago

Solutions that run out of memory (OOM) should fall back to on-disk storage if they are capable of it. AFAIU this is currently possible for pydatatable, spark and dask.
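The general pattern these engines follow is to spill data that exceeds a memory budget to temporary files and merge it back when needed. A minimal pure-Python sketch of that idea (this is illustrative only, not code from any of the engines above; the function name and the row-count budget are hypothetical):

```python
import os
import pickle
import tempfile

def process_in_chunks(rows, mem_budget_rows=1000):
    """Illustrative spill-to-disk: keep a bounded buffer in memory,
    flush full buffers to temporary files, then merge at the end."""
    buffer, spill_files = [], []
    for row in rows:
        buffer.append(row)
        if len(buffer) >= mem_budget_rows:
            # buffer exceeds the in-memory budget: spill it to disk
            f = tempfile.NamedTemporaryFile(delete=False, suffix=".pkl")
            pickle.dump(buffer, f)
            f.close()
            spill_files.append(f.name)
            buffer = []
    # merge: stream spilled chunks back, then append the in-memory remainder
    result = []
    for path in spill_files:
        with open(path, "rb") as f:
            result.extend(pickle.load(f))
        os.unlink(path)
    result.extend(buffer)
    return result
```

Real engines stream and partition rather than materializing the full result, but the trade-off is the same: extra disk I/O in exchange for a bounded memory footprint.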

jangorecki commented 4 years ago

Timeout for join has been increased from 60 to 120 minutes due to the much longer processing time of the newly added spark 1e9 join, which uses on-disk data storage. I experimented with the spark memory limit; timings of the 1e9 join:

 90G -  9h
100G -  9h
110G - 14h
120G - spark crash: "There is insufficient memory for the Java Runtime Environment to continue."
jangorecki commented 4 years ago

The timeout has to be increased much further. Comparing data.table and spark on groupby (1e7, k=100) vs join, the latter seems to take 2-8x longer. This is likely caused by loading data: groupby requires loading 45 GB once, while join requires loading 55 GB twice. At those data sizes groupby can still be computed in memory, but join needs on-disk data storage, which lengthens computation even more. To reduce the total time the benchmark spends on the join task, we can make the timeout parameter granular per data size: 1e7 could get 30 minutes, 1e8 could get 2 hours (both should fit into memory), and 1e9 could get 8 hours (on disk). Then at least we won't wait 8 hours for some slow solution on the 1e7 size.
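A granular timeout could be a simple lookup keyed by data size. A minimal sketch, where the dictionary and function names are hypothetical and the values are the ones proposed above:

```python
# Per-size join timeouts in minutes (values from the proposal above).
JOIN_TIMEOUT_MIN = {
    "1e7": 30,    # fits in memory
    "1e8": 120,   # fits in memory
    "1e9": 480,   # needs on-disk storage
}

def join_timeout(data_size: str) -> int:
    """Return the join timeout in minutes for a given data size label."""
    return JOIN_TIMEOUT_MIN[data_size]
```

The benchmark runner would then pass `join_timeout(size)` to whatever process-timeout mechanism it already uses, instead of one global value.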

jangorecki commented 4 years ago

All 4 scripts for the 3 solutions have been implemented and are already reflected on benchplot.

I am leaving this issue open because we should also carry over the on-disk/in-memory information to benchplot.

jangorecki commented 4 years ago

It is now marked on benchplot with an * suffix, and a related note has been added below the plot in the report. This issue is resolved. Note that there is a related issue, #116, about falling back to RAM when cudf runs out of video memory (GPU memory).

jangorecki commented 3 years ago

There seems to be a performance regression in dask 2.30: it is no longer able to compute the out-of-memory groupby questions using the parquet format. I am going to revert to the in-memory format to reduce maintenance and the size of the data files, as no other solution uses parquet. Ideally we would replace parquet with arrow, which could be re-used by more solutions.