Pavel Marek reports a new STANDUP for yesterday (2023-07-17):
Progress: - Working on the prototype of JMH benchmarks for Enso libs
Pavel Marek reports a new STANDUP for today (2023-07-18):
Progress: - Discussion about bench API specification - how to specify before and after functions? It should be finished by 2023-07-28.
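For context on the "before and after functions" question: JMH already models exactly this through its @Setup and @TearDown lifecycle annotations, so the bench API presumably needs to map onto them. A minimal sketch (benchmark name and payload are made up for illustration):

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class VectorSumBench {
    private long[] data;

    @Setup(Level.Trial)     // the "before" function: runs once per trial
    public void prepare() {
        data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
    }

    @TearDown(Level.Trial)  // the "after" function: runs once per trial
    public void cleanup() {
        data = null;
    }

    @Benchmark
    public long sum() {
        long acc = 0;
        for (long v : data) acc += v;
        return acc;
    }
}
```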
Pavel Marek reports a new STANDUP for yesterday (2023-07-20):
Progress: - The CLI of the custom JMH runner conforms to the standard JMH CLI
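The standup doesn't show the implementation, but one straightforward way for a custom runner to conform to the standard JMH CLI is to delegate argument parsing to JMH's own CommandLineOptions and hand the result to the Runner; a minimal sketch (the class name is invented):

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.CommandLineOptionException;
import org.openjdk.jmh.runner.options.CommandLineOptions;

public class CustomBenchRunner {
    public static void main(String[] args)
            throws RunnerException, CommandLineOptionException {
        // JMH's own parser understands the full standard CLI
        // (-f, -wi, -i, -rf, include patterns, ...), so a runner
        // that delegates to it conforms to the JMH CLI for free.
        CommandLineOptions options = new CommandLineOptions(args);
        new Runner(options).run();
    }
}
```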
Pavel Marek reports a new STANDUP for today (2023-07-21):
Progress: - Troubleshooting the recent GraalVM version update: the sbt build.
Pavel Marek reports a new STANDUP for today (2023-07-24):
Progress: - Reverting some incompatible changes that I introduced after the Graal update in `bench-libs` and `bench-processor`. It should be finished by 2023-07-28.
Pavel Marek reports a new STANDUP for today (2023-07-25):
Progress: - Figuring out how to pass command-line options to the frgaal compiler so that we have a custom class path in the annotation processor.
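For reference, frgaal is a drop-in javac replacement, and in plain javac the annotation-processor path is controlled separately from the compile classpath via -processorpath. A minimal sketch of the idea using the standard compiler API (all paths here are hypothetical):

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class ProcessorPathExample {
    public static void main(String[] args) {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        // -processorpath tells the compiler where to find annotation
        // processors, independently of the compile classpath.
        int exitCode = javac.run(null, null, null,
                "-processorpath", "bench-processor/target/classes",
                "-proc:only",
                "src/main/java/HelloBench.java");
        System.out.println("javac exit code: " + exitCode);
    }
}
```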
Pavel Marek reports a new STANDUP for yesterday (2023-07-26):
Progress: - Struggling a bit with the sbt build config again - the `bench` and `benchOnly` sbt commands. It should be finished by 2023-07-28.
Pavel Marek reports a new STANDUP for today (2023-07-27):
Progress: - Finished the prototype; now we can generate all the JMH code for benchmarks, run a single benchmark, and run all the benchmarks.
Pavel Marek reports a new STANDUP for today (2023-07-28):
Progress: - Integrating a lot of suggestions from the review in `bench-processor`.
Pavel Marek reports a new 🔴 DELAY for today (2023-07-31):
Summary: There is a 9-day delay in the implementation of the Execute (and analyze) single Bench.measure (#7323) task. It will cause a 9-day delay in the delivery of this weekly plan.
Delay Cause: We concluded that we want to finish the whole integration of JMH for stdlib benchmark generation. In other words, the end goal of this task is to create a new CI job that runs all the stdlib benchmarks and collects the data. This is much more involved than the simple ability to run just one benchmark locally.
Pavel Marek reports a new STANDUP for today (2023-07-31):
Progress: - Bumped into yet another AssertionError in Truffle's source section (https://github.com/enso-org/enso/issues/5585) in `SpecCollector`.
Pavel Marek reports a new STANDUP for today (2023-08-01):
Progress: - Added some reasonable annotation parameters specifying benchmark discovery.
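The actual parameters aren't quoted in the standup, but an annotation driving benchmark discovery plausibly looks something like the following; names and defaults are purely illustrative, not the real `bench-processor` API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical discovery annotation: the annotation processor would
// scan for elements carrying it and generate the corresponding JMH
// benchmark sources.
@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.METHOD)
public @interface GenerateBenchSpec {
    /** Project to scan for Bench.measure specs (illustrative). */
    String project() default "Standard.Base";

    /** Only spec groups whose name matches are generated (illustrative). */
    String groupFilter() default ".*";
}
```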
Pavel Marek reports a new STANDUP for yesterday (2023-08-02):
Progress: - Integrating many review comments. It should be finished by 2023-08-06.
Pavel Marek reports a new STANDUP for today (2023-08-03):
Progress: - Integrating the rest of the review comments.
Pavel Marek reports a new STANDUP for today (2023-08-04):
Progress: - Some problems with CI - jobs are taking unusually long, so I cannot merge today. It should be finished by 2023-08-06.
The current Enso benchmarking infrastructure allows taking a high-level view of the benchmark runs, but doesn't allow simple transfer of such benchmarks for detailed analysis via low-level tools like IGV. As a result we are working with Enso benchmarks as a black box. We know something is slow, but we don't have a way to reduce our cluelessness by easily running the same `Bench.measure` benchmark in isolation, with deep insight into the compilation of the single benchmark functionality, without being distracted by the rest.

The exploratory work has already been done in #7101. Time to finish it or to create new PR(s) to move us forward. As the discussion at https://github.com/enso-org/enso/pull/7270#discussion_r1266264485 shows, we need to start writing benchmarks in a way that is more low-level friendly (as the amount of work to modify the original #5067 benchmark was too high to be manually repeated with every benchmark written).
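Assuming JMH is the runner, isolating a single benchmark and capturing Graal compilation graphs for IGV could look roughly like this. The include pattern is hypothetical, and the -Dgraal.Dump flag name varies across GraalVM versions:

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class SingleBenchWithIgv {
    public static void main(String[] args) throws RunnerException {
        Options opts = new OptionsBuilder()
                // Run just one benchmark instead of the whole suite
                // (pattern is hypothetical).
                .include("org\\.enso\\.benchmarks\\..*VectorSum.*")
                .forks(1)
                // Ask the Graal compiler in the forked JVM to dump its
                // compilation graphs, which IGV can then open (flag
                // names depend on the GraalVM version in use).
                .jvmArgsAppend("-Dgraal.Dump=:1", "-Dgraal.PrintGraph=Network")
                .build();
        new Runner(opts).run();
    }
}
```

With a setup like this, the compilation of the one `Bench.measure` body under study can be inspected in IGV without the noise of the remaining benchmarks.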
Follow-up tasks