cmu-db / noisepage

Self-Driving Database Management System from Carnegie Mellon University
https://noise.page
MIT License

Solidify benchmarking infrastructure and have reference numbers #113

Closed. tli2 closed this issue 6 years ago.

tli2 commented 6 years ago

Once we have dedicated benchmark machines, we should start formalizing a suite we want to run for new PRs, as well as reference numbers we think would make sense (TAS performance, DataTable throughput, and others).
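
For concreteness, a minimal sketch of what "a suite plus reference numbers" could look like once written down: a checked-in manifest naming each benchmark binary the PR job runs, the metric it tracks, a baseline figure, and the slack we tolerate. Every binary name and number below is a placeholder, not a measured baseline.

```python
# Hypothetical manifest of the PR microbenchmark suite. Real baselines would
# come from runs on the dedicated benchmark machine, not from this sketch.
PR_BENCHMARK_SUITE = [
    # (benchmark binary,                metric,             baseline, allowed drop)
    ("tuple_access_strategy_benchmark", "items_per_second", 5.0e5,    0.10),
    ("data_table_benchmark",            "items_per_second", 1.0e6,    0.10),
]
```

Keeping the suite and its baselines in one checked-in table would let the PR job judge each run mechanically instead of relying on eyeballing the numbers.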

lmwnshn commented 6 years ago

On a related note, per discussion with @pervazea I've been looking into GitHub Apps, particularly the Checks API. The goal is to have it report performance improvements/regressions on every GitHub PR.
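
As a rough illustration (not a working bot), creating a check run through the Checks API boils down to one authenticated POST against the PR's head commit. The sketch below assumes a GitHub App installation token is already in hand; the owner/repo, check name, and summary values are placeholders.

```python
# Rough sketch: report microbenchmark results back to the PR as a check run.
# Assumes a GitHub App installation token is already available.
import requests


def post_check_run(owner, repo, head_sha, token, passed, summary):
    """Create a completed check run on the PR's head commit."""
    payload = {
        "name": "microbenchmarks",
        "head_sha": head_sha,
        "status": "completed",
        "conclusion": "success" if passed else "failure",
        "output": {
            "title": "Microbenchmark results",
            "summary": summary,  # e.g. a markdown table of throughput numbers
        },
    }
    headers = {
        "Authorization": "token {}".format(token),
        # Preview media type the Checks API required when it launched.
        "Accept": "application/vnd.github.antiope-preview+json",
    }
    url = "https://api.github.com/repos/{}/{}/check-runs".format(owner, repo)
    resp = requests.post(url, headers=headers, json=payload)
    resp.raise_for_status()
    return resp.json()
```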

pervazea commented 6 years ago

This comment is about item 1 below; items 2-4 are related infrastructure concerns.

  1. Microbenchmarks
  2. Macro benchmarks (e.g. OLTPBench)
  3. Long running stress tests
  4. Fuzz tests

The proposed flow for a PR will be:

  1. Travis build. If successful, it triggers the next step.
  2. Jenkins build, which executes the following in parallel:
    1. The "complete" set of unit tests. These will be run via Docker.
    2. Microbenchmarks, run on dedicated benchmark system(s). Like the tests, they will report pass/fail, in addition to the actual numbers (see the sketch after this list).
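
To make step 2.2 concrete, here is a rough sketch of the driver the Jenkins stage could invoke, assuming the benchmarks are Google Benchmark binaries: run the binary with `--benchmark_format=json`, compare each result to its baseline, print the numbers, and exit nonzero on a regression so the stage fails. The binary name and baseline figures are placeholders.

```python
"""Rough sketch of the Jenkins-side benchmark driver (step 2.2).

Assumes the benchmarks are Google Benchmark binaries; the binary name and
the baseline figures are placeholders, not real reference numbers.
"""
import json
import subprocess
import sys

# Placeholder baselines (items/sec); in practice these would be the
# checked-in reference numbers for the dedicated benchmark machine.
BASELINES = {"DataTableBenchmark/SimpleInsert": 1.0e6}
ALLOWED_DROP = 0.10


def run_benchmark(binary):
    """Run one benchmark binary and return Google Benchmark's JSON report."""
    result = subprocess.run([binary, "--benchmark_format=json"],
                            stdout=subprocess.PIPE, check=True)
    return json.loads(result.stdout)


def main():
    report = run_benchmark("./data_table_benchmark")
    failed = False
    for bench in report["benchmarks"]:
        name = bench["name"]
        # items_per_second is reported when the benchmark calls SetItemsProcessed().
        observed = bench.get("items_per_second")
        if observed is None or name not in BASELINES:
            continue
        print("{}: {:.3g} items/sec (baseline {:.3g})".format(
            name, observed, BASELINES[name]))
        if observed < BASELINES[name] * (1.0 - ALLOWED_DROP):
            print("  -> regression beyond {:.0%} tolerance".format(ALLOWED_DROP))
            failed = True
    # A nonzero exit marks the Jenkins stage, and hence the PR check, as failed.
    sys.exit(1 if failed else 0)


if __name__ == "__main__":
    main()
```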

Results from Travis, the Jenkins tests, and the microbenchmarks should all be available via GitHub, so we have a usable, consistent way to view them.

Implementation issues to be worked out:

tli2 commented 6 years ago

Blocked on the benchmark machine.