haskellfoundation / hs-opt-handbook.github.io

The Haskell Optimization Handbook
https://haskell.foundation/hs-opt-handbook.github.io/
Creative Commons Attribution 4.0 International

Criterion, Gauge, Tasty-bench chapter #33

Open doyougnu opened 2 years ago

doyougnu commented 2 years ago

Combine these because of their similar APIs.

Bodigrim commented 1 year ago

The basic API is roughly similar, but capabilities provided are very different. I could probably contribute a chapter on tasty-bench usage, if this is of any interest.

doyougnu commented 1 year ago

Hi @Bodigrim. Yes, that would be great, but please wait a little while. David Christiansen and I are in the process of moving this repository to the HF GitHub, so I want to keep things as simple as possible until that happens. It should be done within the next two weeks; I'll ping you when it's done.

doyougnu commented 1 year ago

@Bodigrim, several months later the migration is still stuck trying to pass some of IOG's internal checks. So I think it's better not to wait; would you still like to contribute the chapter?

Bodigrim commented 1 year ago

If your plan is to describe several benchmarking libraries, we need to agree on the structure.

I can submit Chapter 2.5.1 on tasty-bench, and then further potential contributors may elaborate on criterion / gauge differences in a hypothetical Chapter 2.5.2. Or we can do it vice versa, but in that case I obviously would not be able to start before a description of criterion / gauge is done. I'm in favor of the former option; are you comfortable with it?

Besides that, I'm not a native English speaker. Would you be happy to proof-read?

doyougnu commented 1 year ago

> I can submit Chapter 2.5.1 on tasty-bench and then further potential contributors may elaborate about criterion / gauge differences in a hypothetical Chapter 2.5.2.

Yes, this is a good plan! What I had in mind for the chapter was not to repeat the tutorial for the library but to instead describe a real use case. So for example, start with the library that you'll use tasty-bench to benchmark, describe a bit about the library (like where do we expect the heavy computation to be, which code should we even be benchmarking?), and then describe how the benchmark suite was implemented using tasty-bench. The reason I envision the chapter this way is because it starts where the audience is, i.e., they are looking to use tasty-bench to benchmark something and want a real-world example, not just benchmarking fib.

> Besides that, I'm not a native English speaker. Would you be happy to proof-read?

Absolutely! :)

Bodigrim commented 1 year ago

> What I had in mind for the chapter was not to repeat the tutorial for the library but to instead describe a real use case. So for example, start with the library that you'll use tasty-bench to benchmark, describe a bit about the library (like where do we expect the heavy computation to be, which code should we even be benchmarking?), and then describe how the benchmark suite was implemented using tasty-bench.

Fair enough, but this is different from the current presentation of weigh, which is more a tutorial than a case study.

doyougnu commented 1 year ago

> Fair enough, but this is different from the current presentation of weigh, which is more a tutorial than a case study.

Ah, very true. Let's do a tutorial then. I think it is better to be consistent than to do something special for tasty-bench. My issue with the weigh chapter was that I felt it is a useful library without a good tutorial. I'm not entirely sure I arrived at a good tutorial, but I do feel it improved the documentation around weigh. So I suppose I'm a bit biased towards tasty-bench because I think the tasty suite of packages is already very well documented; it is just documented using micro-examples. What I'm looking for is more "how to use tasty-bench to benchmark my system" and less "how to use tasty-bench to benchmark this or that function". Hopefully that makes sense.
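(For readers following along: the "micro-example" style being discussed looks roughly like the Fibonacci benchmark from the tasty-bench README, sketched here from memory, so details may differ.)

```haskell
import Test.Tasty.Bench

-- A deliberately naive Fibonacci: the thing being measured.
fibo :: Int -> Integer
fibo n = if n < 2 then toInteger n else fibo (n - 1) + fibo (n - 2)

main :: IO ()
main = defaultMain
  [ bgroup "fibonacci"
      [ bench "fifth"     $ nf fibo 5
      , bench "tenth"     $ nf fibo 10
      , bench "twentieth" $ nf fibo 20
      ]
  ]
```

Running the executable reports per-benchmark timings (and, with the right RTS options, allocations); the point above is that this shows the API, but says nothing about where such benchmarks belong in a real system.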

Bodigrim commented 1 year ago

I actually like the idea of writing case studies instead of tutorials; I think the official README for tasty-bench is reasonably good and there is probably not much point to retell it. The question is more about whether you'll be able to integrate case studies into the fabric and structure of the book.

doyougnu commented 1 year ago

> I actually like the idea of writing case studies instead of tutorials; I think the official README for tasty-bench is reasonably good and there is probably not much point to retell it.

Yes, I think there is a paucity of content in much of the Haskell community that simply says "we used foo this way or that way to achieve bar", where bar is either a business outcome or an engineering outcome.

Perhaps the book is missing a part or section on tooling case studies or experience reports of some kind. I have a draft chapter called The Recipe which I plan to remove (well, rewrite with explicit references to David Agans' work). The whole idea of the chapter was an attempt to give an overview of how to do some root cause analysis. This chapter essentially structures the book: in order to do root cause analysis you use tooling for discovery, you hypothesize, you change something based on the optimizations and techniques, and then you use tooling to observe the change.

So a case study on using tasty-bench to benchmark something is not out of scope per se. And I think the narrative would be something like: in project foo, we have <n> sub-systems; we use tasty-bench to monitor the performance of each sub-system in CI; this ensures code and product quality as we ship, or something like that. The key for me is that the chapter has the perspective of a system, not just a code snippet. For example, if we had tasty-bench in GHC I would use this MR as a case study because the narrative is good:

The only thing that would change had we used tasty-bench instead of ticky would be our framing of the problem: allocations vs. wall time spent in a function. But the key point is that we view the benchmarking libraries as quality control and as tooling for performance engineering. Then the case study for the tooling would be about how to set up the benchmarks to be: (1) accurate; (2) maintainable (they need to be useful over the long lifetime of the project); (3) extensible (the project will change, and this monitoring system must also change without a lot of code churn); (4) reliable (see #7). And we would tie this in to the sections in the Recipe chapter about observing system phenomena using tooling.
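(As a hypothetical sketch of that system-level framing: the sub-system names and workloads below are invented for illustration and are not from any real project; real entry points would replace the stand-in functions.)

```haskell
import Test.Tasty.Bench
import Data.List (sort)

-- Stand-ins for two hypothetical sub-systems of "project foo".
parseDoc :: String -> [String]
parseDoc = words

solveProblem :: [Int] -> [Int]
solveProblem = sort

main :: IO ()
main = defaultMain
  -- One bgroup per sub-system keeps the CI report navigable and makes
  -- adding a sub-system a purely local change (extensibility).
  [ bgroup "parser"
      [ bench "small doc" $ nf parseDoc (unwords (replicate 100 "token"))
      , bench "large doc" $ nf parseDoc (unwords (replicate 10000 "token"))
      ]
  , bgroup "solver"
      [ bench "1k elements, reversed" $ nf solveProblem [1000, 999 .. 1]
      ]
  ]
```

tasty-bench's baseline options (`--baseline` with a saved CSV, plus `--fail-if-slower`) can then turn a suite like this into the CI quality gate described above, which speaks to the reliability point.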

What do you think? To be clear I'm brainstorming here :)

Bodigrim commented 1 year ago

> The key for me is that the chapter has the perspective of a system, not just a code snippet.

This might be challenging in the context of a book of limited volume. But I see your point.

I think my best option is to write some blog posts on benchmarking and dual license them so that you can potentially include them in the book and edit as necessary.

doyougnu commented 1 year ago

> I think my best option is to write some blog posts on benchmarking and dual license them so that you can potentially include them in the book and edit as necessary.

Sounds good.