TechEmpower / FrameworkBenchmarks

Source for the TechEmpower Framework Benchmarks project
https://www.techempower.com/benchmarks/

Changes / Updates / Dates for Round 22 #7475

Open · NateBrady23 opened this issue 2 years ago

NateBrady23 commented 2 years ago

Round 21 has concluded. The results will be posted shortly. We'd like to have a quicker turnaround for Round 22. We're hoping between Oct - Nov.

Checklist

billywhizz commented 2 years ago

I was just reading a thread on Reddit about the latest results, and a commenter mentioned that "Stripped" scores are included in the composite results. I didn't think this was allowed/possible, but it turns out this is in fact the case, for actix at least.

The /json and /plaintext results in the composite scores for actix are from the "server" configuration, which is marked as "Stripped". Is this correct, or a mistake in the collation of the results?

Yurunsoft commented 2 years ago

The SQL queries currently tested return too fast. In a production environment the table may have a lot of data, queries will not return that fast, and there can even be slow queries. I suggest adding a slow I/O test.

https://github.com/TechEmpower/FrameworkBenchmarks/issues/6687


I also suggest adding a memory usage indicator as one of the scoring criteria.

NateBrady23 commented 2 years ago

@billywhizz You're correct. That shouldn't be the case. That implementation's approach was changed, but the composite scores were never updated. I'll see what I can do.

billywhizz commented 2 years ago

@nbrady-techempower Yes - I didn't think it was possible when I saw the comment, so I'm glad I checked. By my calculation this would give actix a composite score of 6939, moving it down to ninth place behind officefloor, aspnet.core, salvo and axum. I haven't checked whether the same is happening with any others.
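To make the idea concrete, here is a minimal sketch of re-deriving a composite score so that only non-"Stripped" implementations count per test. The scores, the "best allowed implementation per test" rule, and the plain sum are illustrative assumptions, not TechEmpower's actual aggregation formula.

```python
# A rough sketch (not TFB's real formula): per test, keep only the best
# score from implementations not marked "Stripped", then aggregate.
# All numbers below are made up for illustration.
results = {
    "json":      [("actix", "Realistic", 0.92), ("actix-server", "Stripped", 0.99)],
    "plaintext": [("actix", "Realistic", 0.90), ("actix-server", "Stripped", 0.98)],
    "fortunes":  [("actix", "Realistic", 0.95)],
}

def best_allowed(entries):
    """Best score among the implementations not marked Stripped."""
    allowed = [score for (_, approach, score) in entries if approach != "Stripped"]
    return max(allowed, default=0.0)

composite = sum(best_allowed(entries) for entries in results.values())
print(f"composite without Stripped variants: {composite:.2f}")
```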

joanhey commented 2 years ago

This is not a problem for the TechEmpower people alone. We all need to help together, for the health of the benchmark.

A lot of people don't understand what a benchmark is for. They don't use it to refactor their frameworks to be faster and to learn from others. They use it to apply tricks that make them look faster but that aren't useful in a production app. They use it only as bench-marketing.

joanhey commented 2 years ago

When I said that we need to clarify the rules, this is what I meant:

We are changing the rules because of those people, but everyone needs to follow the rules.

It's like with Faf #7402: we can all learn from that, for good or bad. But nobody said anything about the process priority; that's fine, but it should apply to all frameworks or none. https://github.com/TechEmpower/FrameworkBenchmarks/issues/6967#issuecomment-994793643

The length of the server name has been discussed for some time but still without a solution. Before that, it was a problem with the URLs. It's only a few bytes, but at 7 million req/s it makes enough of a difference.
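As a back-of-the-envelope illustration of the point about header bytes (the figures below are assumptions, not measured values):

```python
# A few extra bytes per response become real bandwidth at plaintext rates.
extra_bytes_per_response = 10       # e.g. a longer Server header value
requests_per_second = 7_000_000     # roughly the top plaintext throughput

extra_bandwidth = extra_bytes_per_response * requests_per_second
print(f"~{extra_bandwidth / 1e6:.0f} MB/s of additional response traffic")
```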

Etc., etc. But these things need to be in the rules so that later they can also be tested. What shouldn't happen is that, because something is not in the rules, you can't report a framework for it. Like caching the response in fortunes.

joanhey commented 2 years ago

I'd also like to see which frameworks are using pipelining in plaintext, the same way we can see which use an ORM or raw queries.

joanhey commented 2 years ago

Another big problem that a lot of us have is the servers.

We have enough information to inform ops teams. From round 18 to 19 we all saw a big drop in performance from the Spectre/Meltdown mitigations. If we change to a new kernel or new CPUs, the results can help many companies.

Whether with the kernel change #7321 or new servers, or with the new kernel with MGLRU, or the patch for older kernels.

These make a very big impact, more than the framework we use. And we need to inform and help people about that.

joanhey commented 2 years ago

Another question: is it allowed to hardcode the Content-Length header? For me, that's a stripped version.
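For clarity, this is the kind of shortcut being asked about - a minimal sketch contrasting a Content-Length computed from the body with one hardcoded for the benchmark's known payload (the payload and header formatting are illustrative only):

```python
# Sketch only: a framework normally derives Content-Length from the body;
# hardcoding it only works because the benchmark payload never changes.
body = b'{"message":"Hello, World!"}'

computed = f"Content-Length: {len(body)}\r\n"   # general-purpose path
hardcoded = "Content-Length: 27\r\n"            # value baked in for this payload

assert computed == hardcoded
print(computed, end="")
```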

joanhey commented 2 years ago

@billywhizz There will always be criticism of any benchmark, but we need to keep it to a minimum.

joanhey commented 2 years ago

More questions:

Is it realistic to have different variants and configs for every test? Some use one for plaintext and json and another for the db tests, but some have one for plaintext, one for json, one for updates, .... Would anybody do that for a real app?

If the framework is using a JIT it is very beneficial, but not realistic. Some are also talking about that in #7358.

billywhizz commented 2 years ago

@joanhey Yes - I tend to think there should be a single entry/configuration allowed per framework, and it should be the same codebase that covers all the tests. This would be much more "realistic" and would also massively decrease the amount of time a full test run takes - some frameworks have 10 or more different configurations that have to be tested!

joanhey commented 2 years ago

I understand variants for a different db or driver, but not per test (JIT), and also not per-test-specific config.

In the same way, some frameworks use only one variant, but the db config is different for every test. How would anybody do that in a real app?

billywhizz commented 2 years ago

@joanhey Good point re: different databases - but apart from that, I think the number of configurations per framework should be minimised. I also think it would work better overall if a run were only triggered when a framework changed, rather than continually running every framework end to end. If on every merge we ran just the changed framework, maintainers would have to wait a lot less time to see the results of their changes.
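As a rough sketch of that "run only what changed" idea - mapping the files touched by a merge to framework directories, under the assumption that implementations live under frameworks/&lt;Language&gt;/&lt;framework&gt;/ (this is not an existing TFB feature):

```python
import subprocess

def changed_frameworks(base: str = "HEAD~1", head: str = "HEAD") -> set[str]:
    """Frameworks whose directories were touched between two revisions."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", base, head],
        capture_output=True, text=True, check=True,
    ).stdout
    frameworks = set()
    for path in diff.splitlines():
        parts = path.split("/")
        # Expected layout: frameworks/<Language>/<framework>/...
        if len(parts) >= 3 and parts[0] == "frameworks":
            frameworks.add(parts[2])
    return frameworks

if __name__ == "__main__":
    print(changed_frameworks())  # e.g. {"robyn", "actix"}
```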

My worry about introducing too many and too complex rules is that they will just discourage people from entering at all, so there is a balance to be found between too many rules and allowing for innovation in the approaches.

franz1981 commented 2 years ago

I shared my thoughts on https://github.com/TechEmpower/FrameworkBenchmarks/issues/7475#issuecomment-1191789666 here: https://github.com/TechEmpower/FrameworkBenchmarks/discussions/7358#discussioncomment-3200476, trying both to give my opinion and to convey that there are things - given what the purpose of the benchmark is - that should change a bit... although I agree with https://github.com/TechEmpower/FrameworkBenchmarks/issues/7475#issuecomment-1191814142 about not making it so complex that it keeps folks from getting in.

joanhey commented 2 years ago

We can create an addendum for those 1-2% of devs who try to cheat with the more esoteric tricks.

About running only the changed frameworks: the benchmark not only helps make code faster, it's also very useful for finding bugs. Right now I'm chasing one with PHP 8.1 and JIT; it's an intermittent bug. Without changing the code or the PHP version, there is a ~15% drop from run to run.

cirospaciari commented 1 year ago

> We can create an addendum for those 1-2% of devs who try to cheat with the more esoteric tricks.
>
> About running only the changed frameworks: the benchmark not only helps make code faster, it's also very useful for finding bugs. Right now I'm chasing one with PHP 8.1 and JIT; it's an intermittent bug. Without changing the code or the PHP version, there is a ~15% drop from run to run.

@nbrady-techempower Robyn uses "const", which basically caches the string in Rust and avoids calling Python at all. I think that's not allowed, right? https://sansyrox.github.io/robyn/#/architecture?id=const-requests

https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/Python/robyn/app.py
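For readers unfamiliar with the feature, here is a generic sketch of what a "const" route amounts to: the handler runs once at registration time and the cached body is replayed on every request. This illustrates the concept only; it is not Robyn's actual API or internals.

```python
# Generic illustration of "const" response caching, not Robyn's real code.
class App:
    def __init__(self):
        self.routes = {}

    def get(self, path, const=False):
        def register(handler):
            if const:
                body = handler()                  # evaluated once, up front
                self.routes[path] = lambda: body  # replayed on every request
            else:
                self.routes[path] = handler       # evaluated per request
            return handler
        return register

    def handle(self, path):
        return self.routes[path]()

app = App()

@app.get("/plaintext", const=True)
def plaintext():
    return "Hello, World!"

print(app.handle("/plaintext"))
```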

romanstingler commented 1 year ago

Is it possible to get the tested framework version?

Is it just me, or should people only be allowed to state approach = "Realistic" when they use the framework's built-in functions/methods, i.e. not allowed to rewrite the router or to write a custom JSON function, since that is not a realistic approach for the framework? Otherwise it is just a programming-language benchmark that pulls in framework functions, makes minimal use of them, or even overrides them.

volyrique commented 1 year ago

The result visualization for every round has a link to the continuous benchmarking run that has been used (for example, round 21 is based on run edd8ab2e-018b-4041-92ce-03e5317d35ea). From the run you can get the commit ID, so that you can browse the repository at the respective revision. Then check the Dockerfile that corresponds to the test implementation you are interested in (and possibly any associated scripts in the implementation directory) to get the framework version that has been used. Unfortunately not all implementations keep their dependencies locked down properly - in that case your best bet is probably to check the build logs from the run. If that does not help, then I am afraid that other than making a guesstimate, you are out of luck.
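Assuming you have a local clone, the lookup described above can be done without checking out the revision; the commit hash and the Dockerfile path below are placeholders you would substitute:

```python
import subprocess

COMMIT = "<commit-id-from-the-run>"                       # placeholder
DOCKERFILE = "frameworks/Python/robyn/robyn.dockerfile"   # example path

# Read the Dockerfile as it existed at that revision.
content = subprocess.run(
    ["git", "show", f"{COMMIT}:{DOCKERFILE}"],
    capture_output=True, text=True, check=True,
).stdout

# The base image and any pinned packages usually reveal the version tested.
for line in content.splitlines():
    if line.startswith("FROM") or "==" in line:
        print(line)
```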

fafhrd91 commented 1 year ago

Hi @nbrady-techempower, have you guys decided on dates for Round 22?

volyrique commented 1 year ago

@nbrady-techempower I noticed an issue that seems to have appeared back in December after continuous benchmarking started running properly again - the Dstat data is missing.

NateBrady23 commented 1 year ago

@fafhrd91 Nothing concrete yet. I've got to get in front of the servers and do some upgrades. I'd like to shoot for late March.

@volyrique Thanks, I'll take a look!

volyrique commented 1 year ago

@nbrady-techempower I had a closer look at the Dstat issue and it looks like a common problem. Unfortunately the tool appears to be unmaintained, and the closest thing to a drop-in replacement seems to be Dool.

graemerocher commented 1 year ago

@nbrady-techempower If there is not going to be an update to the benchmarks soon, can you please remove micronaut from the dataset? Some random non-micronaut maintainer submitted a PR that was merged without the consent of the maintainers and artificially crippled our results (it limited the connection pool size to 5), and we have had to live with people enquiring why it is so low in the results.

People unfortunately use these benchmarks to make technology decisions, and if the data is wrong for long periods it impacts us directly.

NateBrady23 commented 1 year ago

@graemerocher I'm sorry to hear about this. I can get the round 21 results for micronaut removed as soon as I'm back next week. I'm having a hard time finding the commit for that. Would you mind linking it so I can see other activity from that user?

graemerocher commented 1 year ago

@nbrady-techempower the history of what happened is in this thread https://github.com/TechEmpower/FrameworkBenchmarks/discussions/7618

Thanks for helping.

spericas commented 1 year ago

@nbrady-techempower Is there a tentative date for Round 22?

NateBrady23 commented 1 year ago

It's been almost a year since I said I'd like to start having more regular rounds... 😵‍💫

So, I think the biggest thing here was getting Citrine updated. Though I think all the things on the checklist are important, clarifying rules is a never-ending process, and if anyone thinks there's an implementation in clear violation of any rules, please open a PR or an issue and ping me and the maintainers. Otherwise, I think we'll shoot for the first complete run in August.

shaovie commented 1 year ago

Good anticipation

graemerocher commented 1 year ago

@nbrady-techempower requesting this again as we keep getting questions. Please remove the invalid round 21 results for micronaut

NateBrady23 commented 1 year ago

@graemerocher This is done.

Please note that we have added a "maintainers" property in the benchmark_config.json. Feel free to add yourself and anyone else you want to be notified in case changes are made to your tests. It's impossible for us to track who the maintainers are for each project, so this will help us catch this kind of issue in the future.

https://github.com/TechEmpower/FrameworkBenchmarks/wiki/Codebase-Framework-Files
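As a small sketch of how the new property could be used (only the "maintainers" key comes from the comment above; the file path and surrounding structure are assumptions - see the linked wiki page for the real schema):

```python
import json

# Hypothetical example path; substitute your own implementation directory.
with open("frameworks/Python/robyn/benchmark_config.json") as f:
    config = json.load(f)

maintainers = config.get("maintainers", [])
print("notify on changes:", ", ".join(maintainers) or "(none listed)")
```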

everyone:

We're waiting on a few missing pieces for the new rack in the new place. We're hoping to be up and running on Thursday.

graemerocher commented 1 year ago

Thanks

alfiver commented 1 year ago

May I ask when the new round of benchmark results will be released?

NateBrady23 commented 1 year ago

@alfiver As soon as we get Citrine back online we're going to do a few full runs and then the next round. We'll be investigating the issues this week and have an update for everyone around Thursday.

NateBrady23 commented 1 year ago

Sorry, folks. No good news yet. One of the machines won't stay on. We're looking into it and will update when we can.

NateBrady23 commented 1 year ago

Ok, we got lucky! It was just the system battery. The servers have been moved to their new home, and it looks like we're back up and running. We're going to finish a full run to make sure everything looks good, and then we'll send out a notice for when we're locking PRs for this round.

fakeshadow commented 1 year ago

> I was just reading a thread on Reddit about the latest results, and a commenter mentioned that "Stripped" scores are included in the composite results. I didn't think this was allowed/possible, but it turns out this is in fact the case, for actix at least.
>
> The /json and /plaintext results in the composite scores for actix are from the "server" configuration, which is marked as "Stripped". Is this correct, or a mistake in the collation of the results?

I know this is a known issue and just want to point out that the same is happening to xitca-web too. Unfortunately, an unrealistic benchmark is counted towards the total composite score. It would be best if the misleading results could be fixed in an official run. If a quick fix is not possible, I suggest both xitca and actix mark their unrealistic benchmarks as "broken" temporarily until Round 22 is finished.

shaovie commented 1 year ago

> Ok, we got lucky! It was just the system battery. The servers have been moved to their new home, and it looks like we're back up and running. We're going to finish a full run to make sure everything looks good, and then we'll send out a notice for when we're locking PRs for this round.

Excuse me, do you have any good news?

NateBrady23 commented 1 year ago

Unfortunately we had some other issues come up, but they are hopefully resolved now. The latest is here.

NateBrady23 commented 1 year ago

With the last run looking back to normal, it's time to actually set some dates for Round 22!

The run in progress will complete around 9/26. The following complete run will be a preview run. And we'll look to start the round run on 10/3.

We normally lock PRs down during the preview run. I would caution any maintainers against making adjustments to their frameworks during that time. As a reminder, we don't rerun individual frameworks for completed runs.

CosminSontu commented 12 months ago

Please wait for the .NET 8 LTS release if it's not already accounted for. The release is planned for the 14th of November this year. Worst case, please use a release candidate of .NET 8. Thanks!

joanhey commented 12 months ago

Wait for the next run, then, since you don't know the results yet ....

NateBrady23 commented 11 months ago

We had an internet outage that looks like it stopped the preview round run / communication to tfb-status. I'll be in the office tomorrow to see if the preview round completed successfully and kick off the official round.

NateBrady23 commented 11 months ago

Someone was able to restart the service. Since the preview round wasn't able to complete, we'll do one more preview round and move the Round 22 official run to start around Oct 11th.

graemerocher commented 11 months ago

Could someone review and merge https://github.com/TechEmpower/FrameworkBenchmarks/pull/8478 before the run? Thanks.

macel94 commented 11 months ago

> Someone was able to restart the service. Since the preview round wasn't able to complete, we'll do one more preview round and move the Round 22 official run to start around Oct 11th.

@nbrady-techempower any news?

:D

p8 commented 11 months ago

@macel94 It's running: https://tfb-status.techempower.com/