neon-sunset closed this 7 months ago
Hi @neon-sunset,
Thanks for your interest in this project. If this is a secure way to run a production app, let's do it. cc @i4004 @ShreyasJejurkar @panesofglass
The most important thing in this project is to show how technologies can perform in a secure / safe production environment. That said, it is not about tweaking technologies to make them outperform without thinking about safety; we need a production-like environment.
I do not know C# (yet), so I cannot say whether your suggestion is what we should do for this project.
Could you make a dummy commit? Just add a comment to the C# Dockerfile to trigger the appropriate CI jobs.
Some Java frameworks (from what I've seen) specify that the handler code is non-blocking and can be executed inline; this enables similar behaviour here, which should reduce context switching and improve throughput.
It's not about security but a tradeoff for workloads known to take little time to execute, in which case it is more performant to do so inline (well, to dispatch socket completions inline in this case; it does not block the incoming-request handling part of the execution flow, which would usually be undesirable in Kestrel).
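For reference, inline socket completions in .NET are typically opted into via a runtime environment variable; a sketch of what that could look like in a Dockerfile (the exact mechanism this PR uses is an assumption here):

```dockerfile
# Hypothetical Dockerfile excerpt: run socket completions inline on the
# I/O event thread instead of dispatching them to the thread pool.
# Only appropriate when handlers are short and non-blocking.
ENV DOTNET_SYSTEM_NET_SOCKETS_INLINE_COMPLETIONS=1
```

This trades scheduling fairness for lower per-request overhead, which matches a plaintext benchmark workload.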
Could you re-run the failed job, please? I can see other jobs often failing with the same issue ("no route to host", as if the container just isn't scheduled in time).
It is totally OK that BeetleX is failing (at least it is already failing on master).
@neon-sunset Isn't this mode the default one? https://learn.microsoft.com/en-us/aspnet/core/performance/memory?view=aspnetcore-7.0#workstation-gc-vs-server-gc
It depends. It's a standard choice depending on the workload type (even if a particular .NET workload doesn't use it by default). If you were to disallow such configuration flags from the benchmark suite, then similar flags would have to be removed from all other languages; otherwise it's not a fair comparison. I'm just making sure it's enabled here, because sometimes it's not.
(I also don't understand the discrepancy in the numbers, because the plaintext benchmark should be close to or faster than vert.x, activej, etc.)
I totally understand. Could you please point me to where such trade-offs are documented?
The idea behind this is that if the framework / language makes such a choice, it is for a reason, and we should take this into account.
I understand. This may not be the fairest comparison in the world, but it could be OK when the comparison is Java/C# or JS/Python; I mean there are other factors (ecosystem, ...). The idea here is to reveal how technologies perform for what they are (for example, vert.x and FastAPI are not the fairest comparison either).
It's a minor change that's not worth expending this much discussion effort on (especially compared to the effort put in by other frameworks). The WKS and SRV GC flavours mostly differ in that WKS is somewhat better at the latency of small allocations and much more aggressive at keeping its memory footprint low, at the cost of throughput, which can significantly hurt server workloads.
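For context, opting into server GC in a project file is a one-line change (this mirrors what the BeetleX csproj linked below already does; whether this PR sets it in the csproj or elsewhere is an assumption here):

```xml
<!-- Excerpt from a .csproj: use the throughput-oriented server GC
     instead of the latency/footprint-oriented workstation GC. -->
<PropertyGroup>
  <ServerGarbageCollection>true</ServerGarbageCollection>
</PropertyGroup>
```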
Anyway, I can see that e.g. BeetleX (even though it's dead in CI at the moment) already explicitly opts into that too: https://github.com/the-benchmarker/web-frameworks/blob/master/csharp/beetlex/web.csproj#L6
If there is any significant win, I'd expect it to come from inline socket completion, which a workload in docker might be sensitive to.
Please keep in mind that the setup (the local Docker used to generate results) is temporary. This project is meant to run on more robust infrastructure; the choice of local Docker was only made to make adding frameworks easier.
If you can fix BeetleX, be my guest (good catch on this config; I think we should remove it).
(That did not mean we should remove it; I'm just pointing out that it's the case. This whole discussion is moot because that's what other frameworks do, so it's pointless to carve out different rules for .NET.)
As you can see, happyx (and Nim globally) has stayed on top for a long time now.
https://web-frameworks-benchmark.netlify.app/result
I am open to any suggestions / findings regarding frameworks that are tweaked for benchmarks (i.e. configurations one would not be comfortable running as-is in production).
Perhaps we mean something different by "in production"? All these settings are used in production when appropriate.
Sure. I mean: what can be recommended for a typical workload (let's say a server-side API).
This benchmark is an atypical workload (little to no work is done in the request handlers). For highly loaded applications that exhibit similar characteristics, using these flags is expected (in fact, this is a conservative change).
The workload will change to reflect more real-world usage (mimicking database connections, caching and other things we generally use in our applications as software builders).
So, back to the topic at hand. Is there a reason .NET workloads can't use these 3 settings? (The server GC one is just to make sure; most would already pick it anyway.)
We can go with these config values. They can be used in production scenarios. 👍
But I do have one question about the server GC config for @neon-sunset: isn't server GC the default GC for ASP.NET Core APIs?
@ShreyasJejurkar that's correct, but I wanted to make sure it's enabled for all C# contestants, since it's what you want for throughput-oriented applications anyway. Maybe a poor PR description on my end, because inline socket completion is the more important change. This is a "blind" optimization anyway: I don't know (nor can I troubleshoot) the exact reason why ASP.NET Core performs so poorly here (neither can I reproduce it locally), so it's worth trying the options that are known to improve throughput.
@neon-sunset yeah, the numbers for ASP.NET Core are much lower in this benchmark compared to others on the internet. But I agree, we can try the options we know of.
I suggest leaving the defaults and investigating the results with and without.
According to the documentation, server GC is the default. So, unless a framework overrides the default (and that is another question), it should not make any difference.
Sure, let me open a PR that returns all Java benchmarks to JVM defaults too (as well as those of any other languages), to ensure consistency.
The complaint aside, ideally someone (I can do it too, but likely later) would need to reproduce the results and look in detail at the whole benchmark suite on a local Docker instance, to see if there's some dependency or configuration flag that might be interfering with the supposedly good out-of-the-box defaults of the aspnetcore base image.
I changed this to a draft, to reflect that it is more of an investigation than a real PR to merge.
@waghanza I guess we can let this merge; all these options are valid and can be used in production.
@ShreyasJejurkar no need to be explicit here. I understand this is the default value, but I'd rather let C# itself make this choice.
Let me check the results with and without, and see if there are any changes.
@waghanza if that were the case, I don't see frameworks following it anywhere in any of the benchmarks here. In their compile and run commands, people are passing additional options rather than sticking with defaults.
For example, look at the command below. This is just one example; there are many others.
The maintainers of this repo should take a hard stand against this, make everything default, and revise all the frameworks (the dotnet benchmarks don't even have any options specified); only then will these be "fair and production-grade" benchmarks.
Totally agree with you @ShreyasJejurkar
We should stick with the defaults :-)
I'm the only maintainer of this repo, and I cannot keep an eye on all the JVM, CLR and other dynamic-language ecosystems (at least I try to).
Feel free to pick out the options that are non-default, and suggest a change.
I'll gladly accept it ❤️
I think you should go through each framework, see what they do when compiling or running the application, and create a list of the frameworks that add custom options. One GitHub issue with that list would be sufficient. Then maybe tag the people who work on the corresponding area/language and give a timeline to update their framework options. If they don't, don't include their framework in the repo, and likewise exclude it from the results.
You can post an announcement issue to grab people's attention so that they will update. I will open PRs to remove the options from all frameworks (well, I will try to).
This should significantly improve throughput (if it wasn't enabled previously) by using the throughput-optimized GC (a common default for server deployments).
It additionally enables inline socket completion; if Java workloads get to do this, we get to do this too.
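A sketch of what the Kestrel side of this could look like in code. `UseSockets` and `UnsafePreferInlineScheduling` are real Kestrel socket-transport APIs, but whether this PR configures them in code or purely via environment variables is an assumption here:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Run continuations inline on the I/O thread instead of queuing them to
// the thread pool. Only safe because handlers here do trivial, non-blocking
// work; typically paired with DOTNET_SYSTEM_NET_SOCKETS_INLINE_COMPLETIONS=1.
builder.WebHost.UseSockets(o => o.UnsafePreferInlineScheduling = true);

var app = builder.Build();
app.MapGet("/", () => "Hello, World!");
app.Run();
```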