rdlowrey / auryn

IoC Dependency Injector
MIT License

New benchmark says Auryn is extremely slow #151

Closed: garrettw closed this issue 7 years ago

garrettw commented 7 years ago

In new benchmark results released today, GitHub user "kocsismate" says that Auryn is the slowest or next-to-slowest DIC among the packages tested.

Any idea why that might be, or what could be changed to improve that?

morrisonlevi commented 7 years ago

Haven't looked yet, but if your dependency injection container is being slow maybe you should redesign to not need one? It's a tool of convenience.

It can probably be improved, but I don't think this is the component people should be looking at when they have speed issues.

morrisonlevi commented 7 years ago

Before measuring each test, containers are warmed up so autoloading can take place and caches can be fed. Then all tests are performed 10 times [...]

This isn't a real-world scenario; I'd ignore this particular test myself. Provisioning typically happens at application startup, when caches won't be loaded yet and the same path won't be executed multiple times in a row. Benchmarks are hard to do correctly; I don't blame them for trying, or for getting it wrong. What's really needed is a benchmark of small applications written so that you can drop in different containers and see a more realistic impact on overall execution time.

If anyone has any interest in speeding up Auryn that is fine as long as it leaves the general architecture intact.

rdlowrey commented 7 years ago

This is one of those things I've never worried about. If you perform any network I/O at all, it's an order of magnitude more impactful on your response time than DIC overhead; it's just not an important factor in the bigger picture. Could the speed of the library be improved given some recent language improvements? Yes, significantly. Also, the library isn't doing any caching right now to optimize for the web SAPI environment.

It's more a matter of having time to work on it than anything else.
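[Editorial note: the caching idea mentioned above can be sketched roughly as follows. This is a minimal, hypothetical illustration in Python, not Auryn's actual design: the idea is to memoize the result of reflecting over a constructor so repeated instantiations skip the reflection cost. In PHP, a cache like this would additionally need to be persisted (e.g. in APCu or a generated file) to survive across web-SAPI requests.]

```python
import functools
import inspect

# Hypothetical sketch: cache constructor-signature inspection per class.
# The first call pays the reflection cost; later calls hit the cache.
@functools.lru_cache(maxsize=None)
def constructor_params(cls):
    sig = inspect.signature(cls.__init__)
    return tuple(name for name in sig.parameters if name != "self")

class Engine:
    def __init__(self):
        pass

class Car:
    # A container would read this parameter list to know what to inject.
    def __init__(self, engine: Engine):
        self.engine = engine
```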

kelunik commented 7 years ago

@garrettw: Feel free to provide PRs improving it for the web sapi. We're mostly working on Amp things and for long running applications, this kind of overhead doesn't matter.

Danack commented 7 years ago

@garrettw Just one more point: when I last checked, at least some of the other libraries don't do that much error checking. For example, Dice had (when I looked) no checking for circular dependencies, so it just recurses until PHP crashes, whereas Auryn gives you a sensible exception.
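[Editorial note: the circular-dependency check described above can be sketched as follows. This is a language-agnostic illustration in Python with hypothetical names (`Resolver`, `CircularDependencyError`), not Auryn's actual implementation: the resolver tracks the chain of classes currently being built and raises a clear exception when a class reappears in its own chain, instead of recursing until the runtime crashes.]

```python
class CircularDependencyError(Exception):
    pass

class Resolver:
    def __init__(self, definitions):
        # definitions maps a class name to the names it depends on,
        # standing in for what a real container learns via reflection.
        self.definitions = definitions

    def make(self, name, _chain=None):
        chain = _chain or []
        if name in chain:
            # Report the full cycle rather than overflowing the stack.
            path = " -> ".join(chain + [name])
            raise CircularDependencyError(f"Circular dependency: {path}")
        deps = [self.make(dep, chain + [name])
                for dep in self.definitions.get(name, [])]
        return (name, deps)  # stand-in for actually instantiating the class
```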

Some of the others also have significantly fewer features... a DIC without delegate and prepare functionality is not that useful to me.

Yeah, we could possibly make Auryn faster by removing the error checking and features... I don't think that would be a good idea, though.

garrettw commented 7 years ago

As I've worked with Dice's code a fair bit, I've seen a line or two in there that's supposed to help with circular deps, but in my own testing I haven't gotten it to actually work, so maybe I'll take another look and see if I can improve it. Good feedback though, thanks.

I'm actually working on my own DIC right now that uses some ideas from Dice as well as Auryn, so this is helpful to me. 😄

Danack commented 1 year ago

Because I forgot to save them before, here are the current benchmark results, running on an Apple M1 Pro:

\Auryn\Test\Benchmark\ExecuteBench

    bench_native_invoke_closure.............I9 - Mo0.424μs (±0.37%)
    bench_native_invoke_method..............I9 - Mo0.519μs (±1.09%)
    bench_invoke_closure....................I9 - Mo2.637μs (±0.31%)
    bench_invoke_method.....................I9 - Mo4.726μs (±0.27%)
    bench_invoke_with_named_parameters......I9 - Mo6.137μs (±0.37%)
    bench_make_noop.........................I9 - Mo3.783μs (±0.42%)
    bench_make_two_dependency_object........I9 - Mo17.916μs (±0.33%)

\Auryn\Test\Benchmark\SlightylyMoreComplicatedBench

    bench_make_non_trivial_object...........I9 - Mo29.259μs (±0.99%)

i.e. the non-trivial case could be 34 times slower than measured and still take only about a millisecond.