
Stabilize `#[bench]` and `Bencher`? #66287

Open SimonSapin opened 4 years ago

SimonSapin commented 4 years ago

I’ll take the liberty of copying a section of @hsivonen’s Rust 2020 blog post:

Non-Nightly Benchmarking

The library support for the cargo bench feature has been in the state “basically, the design is problematic, but we haven’t had anyone work through those issues yet” since 2015. It’s a useful feature nonetheless. Like I said a year ago and the year before, it’s time to let go of the possibility of tweaking it for elegance and just let users use it on non-nightly Rust.

Indeed the existing benchmarking support has basically not changed in years, and I’m not aware of anyone planning to work on it. To keep reserving the right to make breaking changes is not useful at this point. Custom test frameworks offer another way forward for when someone does want to work on better benchmarking.
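For readers who haven’t used it, here is a minimal sketch of the unstable interface under discussion, as it works on a nightly toolchain today (the `fibonacci` workload is purely illustrative):

```rust
// e.g. benches/fib.rs, or behind #[cfg(test)] in the library; run with `cargo bench` on nightly.
#![feature(test)]

extern crate test;

use test::Bencher;

// Illustrative workload being measured.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

#[bench]
fn bench_fibonacci(b: &mut Bencher) {
    // Bencher::iter runs the closure repeatedly and reports ns/iter;
    // black_box keeps the input from being constant-folded away.
    b.iter(|| fibonacci(test::black_box(20)));
}
```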

So I’d like to propose a plan:

@rust-lang/libs, @rust-lang/lang, do you feel this needs an RFC?

Centril commented 4 years ago

do you feel this needs an RFC?

Given that "basically, the design is problematic" I'd say yes, we should have an RFC that justifies and explains the overall design (as if it was an unimplemented feature).

cc @rust-lang/dev-tools

alexcrichton commented 4 years ago

I'd like to paste parts of this comment (https://github.com/rust-lang/rust/issues/48043#issuecomment-544581895) here:

I'm personally pretty wary and against stabilizing something just because progress isn't happening on it. Stabilization takes real work and has a cost. Simply because an API exists doesn't mean we should stabilize it, even if it's been sitting for months. I think it's pretty bad design to get things into the standard library, then point out it's sat with no feedback for months and propose stabilization.

I would ask again that the justification for stabilization has "it's not been touched for years" only taken as an input of whether to start the stabilization process, not an implicit ratification that the current design is "obviously good enough because no one has changed it".

There have been repeated attempts to stabilize the benchmarking infrastructure in the past and they've all been stymied because the libtest benchmarking support just isn't that great. It's great for quick and dirty things, but like #[test], it does not have extensibility hooks, it's not always the easiest to use, etc. Stabilization often also gets quickly into the weeds of "what to do about std::test?" or "what about custom test frameworks?" and such. I personally think it's very difficult to avoid these questions when proposing stabilization.

Given the history of this feature I would personally expect an RFC to justify why the current design is suitable for stabilization (or the proposal here). Again, though, the RFC will need to justify its design irrespective of the amount of time the design has sat in libtest for now.

Manishearth commented 4 years ago

Note that we already have a landed eRFC for custom test frameworks, and some implementation work there. It would be nice to focus efforts on getting that finished.


SimonSapin commented 4 years ago

@Manishearth Yes, this proposal specifically does not involve opening a design discussion beyond small API-surface-only tweaks, so as not to divert effort and limited team bandwidth from custom test frameworks or other work areas. (And I don’t feel up to personally leading such a discussion at the moment.)

Manishearth commented 4 years ago

Oh, yeah, I agree, I actually think we should find a path for stabilizing bench itself at this point.

One thing I will point out is that a lot of people want some form of bench framework (and are likely willing to put in the work to design and stabilize just that), whereas far fewer people want custom test frameworks, and the work for that is proportionally larger.

I don't want to say that the justification for stabilization is "it hasn't been touched for years", but I do want to say that the fact that benching has been blocked on custom test frameworks has led to this annoying impasse where there just aren't enough interested people to do the huge amount of work to complete custom test frameworks. I'd still prefer if that happened, but that doesn't seem likely. I feel like as a community we have a tendency to fall into this trap, so I do see this issue as a push for a way out.

That said, I already opened an RFC for this and it got closed: https://github.com/rust-lang/rfcs/pull/2287 . The reasoning there hasn't changed much, though I'd argue that the languishing of custom test frameworks is a valid argument to say that we should revisit this.

SimonSapin commented 4 years ago

"It hasn't been touched for years" isn’t an argument for stabilization, it’s a sign that that more time as unstable is unlikely to lead to improvements to this benchmarking framework. Instead, I suspect that improvements will come as something entirely new that can be used through custom frameworks.

I think this leaves two realistic outcomes for #[bench]:

gilescope commented 3 years ago

As a random datapoint: I would love to see exercism.io students being able to run simple benchmarks on their solutions to see if one approach is faster than another. I know criterion is the gold-plated solution, but it would be nice to have something that works out of the box on stable.

Don't let perfect be the enemy of the good: if measuring performance is easier, more people will do it, lowering the CO2 impact that Rust solutions have on the world.
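For comparison, this is roughly the setup the criterion route needs today on stable; the crate version, file name, and workload below are just an example of the extra boilerplate, not a recommendation:

```rust
// benches/fibonacci.rs -- requires `criterion` as a dev-dependency and
// `harness = false` for this bench target in Cargo.toml, e.g.:
//
//   [dev-dependencies]
//   criterion = "0.5"
//
//   [[bench]]
//   name = "fibonacci"
//   harness = false

use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

// Illustrative workload being measured.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn bench_fibonacci(c: &mut Criterion) {
    // Register a named benchmark; the closure receives criterion's Bencher.
    c.bench_function("fibonacci 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, bench_fibonacci);
criterion_main!(benches);
```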

Oliboy50 commented 3 years ago

As a newcomer to Rust, I just wanted to say that it feels frustrating to learn that there is an awesome built-in benchmarking feature (something I've never seen in the few languages I've worked with), but that we "cannot" use it, and it doesn't seem like that will change soon (since the docs say: "The tracking issue for this feature is: None.").

Why is it even mentioned in The Book then? 😢

https://doc.rust-lang.org/book/ch11-01-writing-tests.html

The 0 measured statistic is for benchmark tests that measure performance. Benchmark tests are, as of this writing, only available in nightly Rust. See the documentation about benchmark tests to learn more.