h2oai / db-benchmark

reproducible benchmark of database-like ops
https://h2oai.github.io/db-benchmark
Mozilla Public License 2.0

Need more flexible task scheduler #52

Closed st-pasha closed 5 years ago

st-pasha commented 5 years ago

The benchmark suite grows both in depth and in breadth. In the near future we will not be able to afford the "run everything" strategy anymore. Limiting the number of tasks that are run every day will be a necessity. Approaches such as #50 are a good starting point, but they may not be enough.

I propose implementing a new system in which, at the beginning of each benchmarking session, every task is assigned a "worth" score. The tasks can then be sorted by that score and only the top ones run, until the benchmarking server runs out of time.

The function that evaluates the worthiness of each task will be dynamic, taking many factors into account; a rough sketch of the idea is below.
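
A minimal sketch of how such a scoring scheduler might look, assuming a hypothetical `worth()` heuristic, placeholder weights, and a fixed time budget. None of these names or values come from the repository; they only illustrate "score, sort, run until time runs out":

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Task:
    solution: str       # e.g. "data.table", "pandas"
    task: str           # e.g. "groupby"
    data: str           # dataset identifier
    est_minutes: float  # rough estimate of run time

def worth(t: Task, history: Dict[Tuple[str, str, str], dict]) -> float:
    """Hypothetical heuristic: favour tasks whose solution has a new version,
    tasks that have not run recently, and tasks that failed last time.
    The weights are placeholders, not values used by db-benchmark."""
    h = history.get((t.solution, t.task, t.data), {})
    score = 0.0
    if h.get("new_version"):
        score += 10.0
    score += min(h.get("days_since_last_run", 30), 30) / 3.0
    if h.get("failed_last_run"):
        score += 5.0
    return score

def schedule(tasks: List[Task], history: dict, budget_minutes: float) -> List[Task]:
    """Sort tasks by descending worth and take them until the time budget is spent."""
    selected, used = [], 0.0
    for t in sorted(tasks, key=lambda t: worth(t, history), reverse=True):
        if used + t.est_minutes <= budget_minutes:
            selected.append(t)
            used += t.est_minutes
    return selected
```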

mattdowle commented 5 years ago

This proposal, although good in a perfect world, seems overly complicated for the resources we have right now. Putting the slow products onto a weekly schedule and keeping the fast products running daily is simpler and would be more than adequate for the time being. In any case, I'm not sure what #50's proposal actually is, which is why I had already reopened it and asked for clarification.
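
As a rough illustration of that simpler split, the launcher could pick the set of solutions by day of the week. The grouping below is invented for the example and is not taken from the benchmark's configuration:

```python
import datetime
from typing import List, Optional

# Illustrative split only; a real grouping would come from observed run times.
FAST = ["data.table", "dplyr", "pandas", "pydatatable"]
SLOW = ["spark", "dask", "modin"]

def solutions_for_today(today: Optional[datetime.date] = None) -> List[str]:
    """Run the fast solutions every day; add the slow ones only on Sundays."""
    today = today or datetime.date.today()
    return FAST + (SLOW if today.weekday() == 6 else [])
```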

jangorecki commented 5 years ago

The idea sounds interesting, but at the present moment there is so much maintenance around various exceptions in different tools that growth is actually blocked by that. I even stopped reporting issues to upstream projects due to lack of time to produce isolated reproducible examples. Once all the exceptions are handled, I think it will run smoothly. Most of the tools are not on devel versions (as of now), so they won't run often anyway. Once we have the join task, and maybe the next one, we will probably need to prioritize tests.

jangorecki commented 5 years ago

The benchmark is not being run often these days, once a week or once every two weeks, because most of the tools are not being re-run anyway. We use their release versions, not development ones, and most projects don't publish development releases, nightly snapshots, etc. Pulling the master branch, building from source and using that is not really a proper way to go (it is sometimes even discouraged by the developers of a solution). If the developers of a tool are not publishing development builds, we can assume they have good reasons: it may not be ready to use, it might not even build, it might contain security issues, and so on. We currently build pydt and dplyr from source and hope to switch to nightly releases soon. In my opinion, with the current load of the benchmark we can safely close this issue, as time is not a problem after implementing #50.