enso-org / enso

Enso Analytics is a self-service data prep and analysis platform designed for data teams.
https://ensoanalytics.com
Apache License 2.0

Execute (and analyze) single Bench.measure #7323

Closed JaroslavTulach closed 1 year ago

JaroslavTulach commented 1 year ago

The current Enso benchmarking infrastructure allows us to take a high-level view of benchmark runs, but it doesn't allow a simple transfer of those benchmarks for detailed analysis with low-level tools like IGV. As a result we are working with Enso benchmarks as a black box. We know something is slow, but we don't have a way to reduce our cluelessness by easily running a single Bench.measure benchmark in isolation, with deep insight into the compilation of that one piece of benchmark functionality, without being distracted by the rest.

### Tasks
- [x] Design API for **declarative definition of Enso benchmarks** - currently [this API](https://github.com/enso-org/enso/pull/7101/commits/88fd6fb9888d939c2d9745a7397f6e87dec4fa4d) seems like an acceptable starting point - see #7324
- [x] Design `sbt` command to run all Enso benchmarks (written using #7324)
- [x] Design `sbt` command to run single benchmark given its name (or other ID)
- [x] Connect with IGV - play nicely with `withDebug --dumpGraphs`
- [ ] Align with `@Warmup(iterations)` concept of JMH - https://github.com/enso-org/enso/pull/7270#discussion_r1266276705

The exploratory work has already been done in #7101. Time to finish it or to create new PR(s) to move us forward. As the discussion at https://github.com/enso-org/enso/pull/7270#discussion_r1266264485 shows, we need to start writing benchmarks in a way that is more low-level friendly (the amount of work needed to modify the original #5067 benchmark was too high to be repeated manually for every benchmark written).
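For orientation, the `@Warmup(iterations)` item in the task list above is plain JMH: each benchmark declares its own warmup and measurement budget, so a single benchmark can be executed in isolation and inspected in IGV without the rest of the suite getting in the way. The sketch below is a minimal, generic JMH benchmark; the class name and workload are illustrative only and not the actual code generated for Enso benchmarks (which is the subject of #7324/#7101).

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)      // JIT warmup rounds, not measured
@Measurement(iterations = 3, time = 1, timeUnit = TimeUnit.SECONDS) // measured rounds
@Fork(1)
@State(Scope.Benchmark)
public class ExampleBenchmark {

    // Illustrative workload; a generated Enso benchmark would instead
    // evaluate the corresponding Bench.measure body through the Enso runtime.
    private long[] data;

    @Setup
    public void setup() {
        data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }
    }

    @Benchmark
    public long sum() {
        long acc = 0;
        for (long v : data) {
            acc += v;
        }
        return acc;
    }
}
```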

Follow-up tasks

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for yesterday (2023-07-17):

Progress: - Working on the prototype of JMH benchmarks for Enso libs

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for today (2023-07-18):

Progress: - Discussion about bench API specification - how to specify before and after functions? It should be finished by 2023-07-28.

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for yesterday (2023-07-20):

Progress: - The CLI of the custom JMH runner conforms to the standard JMH CLI
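(For illustration only: a custom JMH launcher usually stays compatible with the standard JMH CLI by delegating argument parsing to JMH's own `CommandLineOptions` and handing the result to `Runner`. The sketch below is a generic example of that pattern; `BenchMain` is a placeholder name, not the actual Enso runner, which additionally has to discover the generated benchmark classes.)

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.CommandLineOptionException;
import org.openjdk.jmh.runner.options.CommandLineOptions;

public class BenchMain {
    public static void main(String[] args) throws RunnerException, CommandLineOptionException {
        // Parse the standard JMH flags (-i, -wi, -f, benchmark-name regex, ...).
        CommandLineOptions options = new CommandLineOptions(args);
        // Run the selected benchmarks exactly as `org.openjdk.jmh.Main` would.
        new Runner(options).run();
    }
}
```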

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for today (2023-07-21):

Progress: - Troubleshooting the recent GraalVM version update:

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for today (2023-07-24):

Progress: - Reverting some incompatible changes that I introduced after Graal Update

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for today (2023-07-25):

Progress: - Figuring out how to pass command-line options to the frgaal compiler so that we have a custom class path in the annotation processor.

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for yesterday (2023-07-26):

Progress: - Struggling a bit with the sbt build config again.

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for today (2023-07-27):

Progress: - Finished the prototype; we can now generate all the JMH code for the benchmarks, run a single benchmark, and run all the benchmarks.

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for today (2023-07-28):

Progress: - Integrating a lot of suggestions from the review.

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new 🔴 DELAY for today (2023-07-31):

Summary: There is a 9-day delay in the implementation of the Execute (and analyze) single Bench.measure (#7323) task. It will cause a 9-day delay in the delivery of this weekly plan.

Delay Cause: We concluded that we want to finish the whole integration of JMH-based generation of stdlib benchmarks. In other words, the end goal of this task is to create a new CI job that runs all the stdlib benchmarks and collects the data. This is much more involved than simply being able to run one benchmark locally.

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for today (2023-07-31):

Progress: - Bumped into yet another AssertionError in Truffle's source section - https://github.com/enso-org/enso/issues/5585

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for today (2023-08-01):

Progress: - Added some reasonable annotation parameters specifying benchmark discovery.

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for yesterday (2023-08-02):

Progress: - Integrating many review comments. It should be finished by 2023-08-06.

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for today (2023-08-03):

Progress: - Integrating the rest of the review comments.

enso-bot[bot] commented 1 year ago

Pavel Marek reports a new STANDUP for today (2023-08-04):

Progress: - Some problems with CI - jobs are taking unusually long, cannot merge today. It should be finished by 2023-08-06.