danielpalme / IocPerformance

Performance comparison of .NET IoC containers
https://danielpalme.github.io/IocPerformance
Apache License 2.0

Discussion: How to remove "noise" from the benchmark #83

Closed danielpalme closed 12 months ago

danielpalme commented 6 years ago

At the moment the benchmark contains 35 containers.

Question 1: Should containers be categorized as relevant / not relevant for the benchmark?

Question 2: Based on which criteria should containers be categorized as relevant / not relevant?

Question 3: What should happen with containers that are not relevant

dotnetjunkie commented 6 years ago

Not actively maintained

How do you determine 'Not actively maintained'? For instance, do you consider Ninject "actively maintained"? The last commit was in April this year; the last question was in March. And once you have determined what the criteria are, how do you prevent abuse (like a maintainer doing a dummy check-in once in a while)?

Few features, no integration options

How do you determine 'Few features'? For instance, there is a quite elaborate DI feature comparison here (by @ashmind), but the list of checked features is quite subjective: what some consider an essential feature, others might consider harmful. Simple Injector, for instance, seems to lack features on that list, while it has a very well documented design philosophy that explains why certain features should (in Simple Injector's vision) not exist. On the other hand, those feature tests are very limited in scope, which unfortunately says very little about the quality of the underlying implementation.

ipjohnson commented 6 years ago

@dotnetjunkie so is it your stance that nothing should be done?

Also, I'm not sure what this means: "On the other hand, some containers seem to just try to get a 100% score on that comparison, while the quality of their implementation might vary." Do you have a reason to assert there might be quality problems with other containers? Otherwise the comment just seems out of line.

dadhi commented 6 years ago

As this is a performance benchmark, I would consider putting a certain threshold on the benchmark results and moving "slow" containers below the cut. Update the fast containers often, and the slow containers only on someone's/the maintainer's request.

Personally, when checking DryIoc performance with this benchmark, I put a FastAttribute on just a few containers and filter out the rest. This gives me fast, focused results.
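Roughly what that looks like for me, as a sketch (the attribute and helper below are local conveniences, not part of the published benchmark):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical marker attribute: applied locally to the few container adapters I care about.
[AttributeUsage(AttributeTargets.Class)]
public sealed class FastAttribute : Attribute { }

public static class ContainerFilter
{
    // Keep only adapter types decorated with [Fast], so a local benchmark run stays short and focused.
    public static IEnumerable<Type> OnlyFast(IEnumerable<Type> adapterTypes) =>
        adapterTypes.Where(t => t.IsDefined(typeof(FastAttribute), inherit: false));
}
```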

ipjohnson commented 6 years ago

@dotnetjunkie so because you believe SimpleInjector's implementation is better, you feel it's OK to question other containers' quality? Have you gone through and tested all these other containers that you are implying aren't as good, or are you just assuming they can't be as good?

ipjohnson commented 6 years ago

@danielpalme would it be possible to make the results page more interactive, so containers could be filtered on some criteria (speed, released in the last year, etc.)? This way things aren't removed per se, but they can easily be hidden.

danielpalme commented 6 years ago

@ipjohnson Yes, @lamLeX had the idea to create an interactive page (see #82).

Let's wait for his result, and then we can add more filter options.

dotnetjunkie commented 6 years ago

@danielpalme, let me retract that previous statement of mine, since it distracts from the main conversation in this thread. The point I was trying to make was that comparing features is difficult, because a feature description like 'supports open generics' is just too broad. It will most likely disqualify certain containers that may have other compelling features which don't fit into such a short feature description.

danielpalme commented 6 years ago

I converted the benchmark to .NET 4.7 and VS 2017, and dropped packages.config in favor of PackageReference. I also dropped the containers DryIocZero, Petite.Container and TinyIoC, since they can't be properly installed via NuGet.

ipjohnson commented 6 years ago

@danielpalme you can also drop StyleMVVM, as it became the basis for Grace.

dadhi commented 6 years ago

@danielpalme, the new NuGet made it harder to deliver content files with packages, e.g. for DryIocZero. I hope to solve it soon.

danielpalme commented 6 years ago

@ipjohnson I also removed StyleMVVM.

dzmitry-lahoda commented 6 years ago

Question 1: Should containers be categorized as relevant / not relevant for the benchmark?

Yes

Question 2: Based on which criteria should containers be categorized as relevant / not relevant?

(lastUpdate > NOW - 2*DOT_NET_RELEASE_CADENCE) AND (contributors > 2) AND (nugetDownloads > 10000 OR gitStars > 20)
(See the code sketch at the end of this comment.)

Only few months of history

I have seen a project that got to the top with a short history, because it was very cool and got the NH effect.

Few features, no integration options

Very hard to measure.

Maybe this kind of metric already exists in the wild? E.g. some open source project metrics.

Question 3: What should happen with containers that are not relevant

Completely remove them from the benchmark

Yes. Git stores history for revival later.

Create a separate list with legacy / not relevant containers

Put removed containers into a list, with the reason for removal and the date.
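A rough sketch of the criteria from Question 2 in code (the types and thresholds are illustrative, and I'm assuming a .NET release cadence of roughly a year):

```csharp
using System;

public sealed class ContainerStats
{
    public DateTime LastUpdate { get; set; }
    public int Contributors { get; set; }
    public long NugetDownloads { get; set; }
    public int GitStars { get; set; }
}

public static class Relevance
{
    // Assumption: one .NET release cadence is roughly a year, so two cadences ~ 730 days.
    private static readonly TimeSpan TwoReleaseCadences = TimeSpan.FromDays(2 * 365);

    public static bool IsRelevant(ContainerStats s) =>
        s.LastUpdate > DateTime.UtcNow - TwoReleaseCadences
        && s.Contributors > 2
        && (s.NugetDownloads > 10_000 || s.GitStars > 20);
}
```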

jzabroski commented 5 years ago

I think there are SO MANY IoC frameworks (I hate the word "container", as I believe it fundamentally breaks encapsulation). The best thing to do is to start showing month-over-month trends, so that the open source community can begin to choose libraries that are actively being made faster and gaining more features.

If we adopt benchmark.net, then we can really get some hardcore data.
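For example, a minimal sketch with BenchmarkDotNet (using Microsoft.Extensions.DependencyInjection as a stand-in container; the types here are purely illustrative):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Microsoft.Extensions.DependencyInjection;

public interface ISingleton { }
public class Singleton : ISingleton { }

[MemoryDiagnoser]
public class ResolveBenchmark
{
    private ServiceProvider provider;

    [GlobalSetup]
    public void Setup()
    {
        // Register once; only the resolve call is measured.
        var services = new ServiceCollection();
        services.AddSingleton<ISingleton, Singleton>();
        provider = services.BuildServiceProvider();
    }

    [Benchmark]
    public ISingleton Resolve() => provider.GetRequiredService<ISingleton>();
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<ResolveBenchmark>();
}
```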

dzmitry-lahoda commented 5 years ago

I guess some IoCs will fail to run reasonably on real graphs (https://github.com/danielpalme/IocPerformance/issues/86). Hence it is possible both to get a real evaluation in the benchmarks and to stop measuring containers which are fast (first) on small, unrealistic setups (cheating), but fail with a timeout on real ones.

I hoped the issue would be voted on so I could proceed, but it is closed. Also, I guess the Java world may already have known specifications for object graphs that we could reuse.

dzmitry-lahoda commented 5 years ago

https://github.com/JSkimming/abioc has been dead for 1+ year. Would you accept a pull request to drop it from the source?

jzabroski commented 5 years ago

I think abioc is very valuable in generating ideas for other implementations. I really dislike the idea that just because development stops on a product, it is dead. Newtonsoft.Json for example has tons of bugs due to really bad ideas and it has many releases - should we be happy about its pile of bugs?

ipjohnson commented 5 years ago

@jzabroski maybe it's interesting, or maybe the implementation didn't get fleshed out because supporting more features would mean having more registrations, leading to the container losing its performance edge.

I'm not making a case for or against removing it. Just pointing out there might be a reason why it doesn't have anywhere near the features of other containers.

jzabroski commented 5 years ago

If there is a critical flaw in abioc's implementation, then point it out. Speculation and hypotheses without scientific process is just gossip.

ipjohnson commented 5 years ago

It's not a flaw, it's the way it's designed. It compiles one giant switch statement with all possible registrations. As more registrations are added, the switch gets bigger.

So no, not gossip, just the opinion of someone who has looked at the code and done a fair amount of DI benchmarking.
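Roughly the shape of it, as a hand-written approximation (not abioc's actual generated code):

```csharp
using System;

public class ServiceA { }
public class ServiceB { public ServiceB(ServiceA a) { } }

// Every additional registration adds another case, so the generated method keeps growing.
public static class GeneratedResolver
{
    public static object Resolve(Type type)
    {
        switch (type.Name)
        {
            case nameof(ServiceA): return new ServiceA();
            case nameof(ServiceB): return new ServiceB(new ServiceA());
            // ...one case per registration...
            default: throw new InvalidOperationException($"No registration for {type}.");
        }
    }
}
```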

jzabroski commented 5 years ago

Then a benchmark similar to the one Fyodor Soikin created for Maxim Volkau ~4 years ago could be helpful: https://bitbucket.org/dadhi/dryioc/issues/152/exponential-memory-performance-with

(This is something I've been wanting to try...)

ipjohnson commented 5 years ago

Always a big fan of @dadhi's work, and I'd definitely be interested in more benchmarks.

That said, these days I'm not spending much time on performance tweaking, but I did write some benchmarks that I think are interesting here. I'd be happy to take a PR for abioc if you wanted to add it.

jzabroski commented 5 years ago

@ipjohnson After thinking about it, probably a reasonable thing to do would be to cut out most frameworks not in, say, the top 20% (sort of how companies cut the bottom performers from their sales force) and port the top 20% to benchmark.net. That would achieve my goal of seeing trends in library code quality over time.

ipjohnson commented 5 years ago

The topic of Benchmark.net has come up a couple of times here.

Benchmark.net is part of the reason I wrote my own benchmarks; that, plus a desire to dive deeper into some topics than most people want/need.

jzabroski commented 5 years ago

What I now want, in Q2 2019, is the ability to see trends over time:

  1. are some containers getting better with each release?
  2. are some containers addressing specific issues highlighted by the benchmark?

Ideally I want a HighCharts-like plot with labels on key data points, and the ability to filter by specific labels. This can all be done with an ELK backend. It would be even more awesome if extra data was pulled in, such as NuGet downloads.

There are simply so many containers these days. Frankly, the only ones I pay attention to are abioc, fInjector, SimpleInjector, Grace, and DryIoC. Autofac is useful as many people still use it commercially.

danielpalme commented 5 years ago

@jzabroski: Just go ahead :-)

Here's the complete historic data starting in February 2015: History.zip

jzabroski commented 5 years ago

@danielpalme Thanks. I can try a proof of concept at some point this year. I'll start by defining an Elasticsearch document model - what you have as XML is nearly suitable, but each benchmark should probably be a column in the document model.
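Something along these lines, sketched as a C# document class (the field names and scenarios are illustrative, not a final model):

```csharp
using System;

// One document per container per benchmark run; each benchmark scenario becomes its own column.
public sealed class BenchmarkRunDocument
{
    public string Container { get; set; }      // e.g. "DryIoc"
    public string Version { get; set; }        // package version that was benchmarked
    public DateTime RunDate { get; set; }      // when the run happened
    public double PrepareMs { get; set; }      // one numeric column per scenario...
    public double SingletonMs { get; set; }
    public double TransientMs { get; set; }
    public double CombinedMs { get; set; }
    public long NugetDownloads { get; set; }   // optional enrichment data
}
```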

sgf commented 5 years ago

About "Not actively maintained": isn't having a sample, a wiki, or any getting-started guide a very important thing?

danielpalme commented 4 years ago

@jzabroski I added a little chart to the results page. It shows the performance evolution over time for every container/benchmark: See https://danielpalme.github.io/IocPerformance/

jzabroski commented 4 years ago

Excellent job.

MinhThienDX commented 4 years ago

Hello @danielpalme, what about removing "noise" on your website too? I saw that you have removed StyleMVVM from this repo, but StyleMVVM is still listed on your website: www.palmmedia.de/Blog/2011/8/30/ioc-container-benchmark-performance-comparison

I think we can remove these containers below too. Format: container & year of last commit or last NuGet update:

- fFastInjector 2015
- Funq 2013
- Griffin 2017
- HaveBox 2014
- IfInjector 2013
- LightCore 2013
- MEF 2012 (last commit on GitHub is 2017, after the move from CodePlex)
- MEF2 dead URL
- MicroSliver 2012
- Mugen 2016
- Munq 2012

danielpalme commented 4 years ago

@MinhThienDX You are right, the list is outdated. But currently other things are much more important and valuable (for me) than updating that list.

MinhThienDX commented 4 years ago

Hello @danielpalme, I understand that. How about I try to come up with a PR, and you just need to check and merge it? Sound good?

danielpalme commented 4 years ago

@MinhThienDX: Sure

Wsm2110 commented 1 year ago

@danielpalme Not really sure if this is still ongoing...

The baseline (starting point) of each benchmark differs depending on the enabled features. Enabling more benchmark features results in more delegates being created, and since every container uses a hashmap or some sort of collection, that leads to more hash collisions.

Each benchmark, no matter which features are enabled, should have the same starting conditions, which is currently not the case.

As far as I can tell, these benchmarks are far from perfect.

Benchmarks should start from the same baseline (preferably a clean slate): register the objects, then resolve them.
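For illustration, a clean-slate measurement could look roughly like this (a sketch using BenchmarkDotNet and Microsoft.Extensions.DependencyInjection as a stand-in, not a proposal for this repo's exact benchmark shape):

```csharp
using BenchmarkDotNet.Attributes;
using Microsoft.Extensions.DependencyInjection;

public interface IService { }
public class Service : IService { }

public class CleanSlateBenchmark
{
    [Benchmark]
    public IService RegisterAndResolve()
    {
        // Fresh, empty container on every invocation: identical starting conditions
        // regardless of which features are enabled elsewhere.
        var services = new ServiceCollection();
        services.AddTransient<IService, Service>();
        var provider = services.BuildServiceProvider();
        return provider.GetRequiredService<IService>();
    }
}
```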

Just wondering what you think about this; I'm willing to put in the work and provide a pull request.