the-benchmarker / web-frameworks

Which is the fastest web framework?
MIT License
6.91k stars 641 forks

Running on a public cloud? #92

Closed waghanza closed 5 years ago

waghanza commented 6 years ago

Hi,

I find this project very useful, at least as one input into my technical choices.

I think that running all the tests on a single public cloud could help guide technical choices.

What do you think about that?

Regards,

tbrand commented 6 years ago

That's a good idea, but there are some issues with it.

As the founder of this project, I want (and have) to collect the results for all languages in this benchmark. But you can fork it and choose a few languages as targets.

The following tips might help you.

# Choose ruby, crystal and go as targets
make benchmarker ruby crystal go

# Running benchmark
bin/benchmarker ruby crystal go

# If you want to record the results onto README.md
bin/benchmarker ruby crystal go --record

Thanks!

waghanza commented 6 years ago

Hi @tbrand,

You raise multiple points here.

waghanza commented 6 years ago

@tbrand @OvermindDL1 what do you think?

@tbrand Objective-C can be compiled in Docker => https://store.docker.com/community/images/dexec/objc ; this should be taken into account for https://github.com/tbrand/which_is_the_fastest/issues/123

OvermindDL1 commented 6 years ago

I'm NOT a fan of performing benchmarks on cloud/shared infrastructure of any kind, as you will then get inconsistent results (well, you would get inconsistent results on a Mac desktop too). An 'empty' dedicated server is what should be used (see my other issue with some results from that, among other tests, which really should be built into this too for statistical testing): https://github.com/tbrand/which_is_the_fastest/issues/101
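The statistical testing mentioned above boils down to running each framework several times and summarizing the spread of the results. A minimal sketch of that idea, using Python's stdlib `statistics` module (the throughput numbers are invented for illustration, not taken from the benchmark):

```python
import statistics

# Hypothetical requests/sec from five repeated runs of one framework on a
# dedicated, otherwise idle machine (numbers are made up for illustration).
runs = [98_412.0, 98_950.0, 97_880.0, 99_102.0, 98_330.0]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)

# A small relative spread (well under 1% here) suggests the test bed is
# stable enough to compare frameworks against each other.
print(f"mean={mean:.0f} req/s, stdev={stdev:.0f} ({stdev / mean:.2%})")
```

If the relative spread between repeated runs is larger than the gap between two frameworks, the ranking of those two frameworks is noise.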

waghanza commented 6 years ago

@OvermindDL1 I understand your concern; I'm not a fan either. For me, running on shared environments COULD break result consistency.

However, cloud IS NOT shared. Of course, technically a hypervisor is used.

However, running on the cloud means isolation (all instances have their own CPU / memory), at least on the cloud providers I have tested.

OvermindDL1 commented 6 years ago

I know cloud is not shared, but cloud does not always imply a 'fixed' instance of CPU/memory. Instances can be migrated in surprising ways, and though you may keep the same speed/amount of CPU and memory, there can be sudden and surprising latencies, caused by switching behind the scenes, that would not occur on a dedicated system.

waghanza commented 6 years ago

I understand. However:

OvermindDL1 commented 6 years ago

is our goal to be closest to production-ready application usage?

This is expressly not the goal of this repository. ;-)

We want to know the response time (routing time), not usability.

And the latency randomness and other issues that the cloud would introduce would just inject random noise into the benchmarks, which is NOT what you want in a benchmark. Just because some real-life situations involve noise, you really really do not want it in a benchmark. (Which is also why tbrand really needs to stop running the benchmarks on a noisy desktop... >.>)

bare-metal / full isolation is expensive

Not even remotely! I have a dedicated 16-core server with 16 GB of RAM that I test on here, see: https://github.com/tbrand/which_is_the_fastest/issues/101
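The noise argument above can be made concrete with a tiny simulation: take a stable baseline latency distribution and add occasional large stalls of the kind a shared or migrated host can produce. This is an illustrative sketch (the latency figures and the 5% stall rate are assumptions, not measurements from this repository):

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Baseline: a framework answering in ~1.00 ms on a quiet dedicated box.
quiet = [random.gauss(1.0, 0.02) for _ in range(1000)]

# Same framework on a noisy/shared host: ~5% of requests hit a large,
# unpredictable stall (VM migration, noisy neighbor, host scheduling).
noisy = [t + (random.expovariate(2.0) if random.random() < 0.05 else 0.0)
         for t in quiet]

# The occasional stalls inflate the mean a little and the spread a lot,
# which is exactly what corrupts framework-vs-framework rankings.
print(f"quiet: stdev={statistics.stdev(quiet):.3f} ms")
print(f"noisy: stdev={statistics.stdev(noisy):.3f} ms")
```

Even though 95% of the samples are identical, the spread of the noisy series is several times larger, so small real differences between frameworks drown in it.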

waghanza commented 6 years ago

This is expressly not the goal of this repository. ;-)

So what is the main goal?

@OvermindDL1 I do not understand your last point

OvermindDL1 commented 6 years ago

So what is the main goal?

Pure speed testing with as few other variables as possible.

@OvermindDL1 I do not understand your last point

I.e., I can perform dedicated server tests with ease. It's a build server I own, and it's trivial to suspend the CI for a period while I run tests, as I did in that post. In the post I show the statistical output of the tests; if they were run on a cloud or shared system, the results would vary far more wildly than a statistical summary could capture, and that would corrupt the output of the current setup (as tbrand is currently doing by running it on a desktop; why do you think his results differ so wildly from mine?).

waghanza commented 6 years ago

Pure speed testing with as few other variables as possible.

Yes, but it would be more accurate to reflect a particular usage: production usage. Actually, Django runs on gunicorn (not a built-in server), but Flask uses its built-in one here. No enterprise project using Flask runs it on the built-in server.

@OvermindDL1 I understand that you get different results than @tbrand, since it's not the same hardware... so why not run on a standard hosting (one usable by others), a.k.a. the cloud?
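The dev-server-vs-gunicorn point above hinges on the fact that Flask and Django apps are ultimately WSGI callables, so the same application can sit behind either server. A minimal sketch (the `gunicorn app:app` invocation assumes this file is named `app.py`; that name is hypothetical):

```python
# Minimal WSGI app, the interface both Flask and Django build on.  The same
# callable can be served by the stdlib development server or by a production
# server such as gunicorn, e.g. `gunicorn app:app` if this file is app.py.
# Benchmarking one framework behind its dev server and another behind
# gunicorn measures two different things.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

# Development-only serving (roughly what Flask's built-in server wraps):
# from wsgiref.simple_server import make_server
# make_server("127.0.0.1", 8000, app).serve_forever()
```

Swapping the server changes concurrency model, worker count, and latency profile without touching a line of application code, which is why "production usage" and "built-in server" numbers are not comparable.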

OvermindDL1 commented 6 years ago

Because it's a free server, and a lot more powerful than what most cloud offerings provide, especially taking the prices into account... ^.^;

/me is still amazed at how slow Crystal was in the tests here; I wonder if it's related to them not running in production mode. I should test again (ping me on that one issue if anyone wants the statistical tests to be re-run!)...

waghanza commented 6 years ago

:trollface: free? That's amazing, I didn't know about free hardware.

Anyway, my personal purpose IS to test the raw performance of each framework, but as close as possible to the way enterprises actually use those frameworks.

OvermindDL1 commented 6 years ago

:trollface: free? That's amazing, I didn't know about free hardware.

It's quite common! Once it's paid for, you get free unlimited use of it! Except for the cost of electricity (and maybe internet, if it's not shared out to something else, which mine is). ;-)

my personal purpose IS to test the raw performance of each framework,

Ah so you definitely don't want weird latencies popping up!

waghanza commented 6 years ago

It's quite common! Once it's paid for, you get free unlimited use of it! Except for the cost of electricity (and maybe internet, if it's not shared out to something else, which mine is). ;-)

Sure, it's not free, but the major part of the cost is up front ;-) Still, that's not the same logic as the majority of businesses follow.

Ah so you definitely don't want weird latencies popping up!

I don't understand

OvermindDL1 commented 6 years ago

I don't understand

If you are wanting to test raw throughput without uncontrollable internet and system conditions interfering, then you want as stable a test-bed as possible. :-)

waghanza commented 6 years ago

Ah, OK, you're saying that internet connection stability could cause result inconsistency.

OvermindDL1 commented 6 years ago

you're saying that internet connection stability could cause result inconsistency

That is one possible condition, hence why you'd prefer to test between 2 machines on a dedicated local wired network (which I also have, though this repo is built for a single server right now, which is not entirely reliable either, since you should never generate load from the same system the server is running on), with no other processes running on them, on dedicated hardware with no chance of migration or anything else of the sort, etc...

Running it under any other conditions (like on a desktop machine) introduces changes that can be quite significant (like the latest update that puts Phoenix faster than Plug, which could never happen, as Phoenix is just 'more' plugs on top of Plug).

waghanza commented 6 years ago

Interesting point of view. You must do a lot of automation, then, to provide a clean and isolated environment to run each framework in.

Even in the cloud we can run on a dedicated connection (or one as close as possible to a local connection).

OvermindDL1 commented 6 years ago

you must do a lot of automation

Ungodly huge amounts. CI build systems for everything from C++ to Elixir and more (Rust recently growing). Server deployment automation. Etc... I have no clue how I even reached this point, but even just my personal non-work stuff is massive... >.>

waghanza commented 6 years ago

I see that as a point in favor of running on the cloud, though: at least it would decrease the time you spend automating everything.

OvermindDL1 commented 6 years ago

I see that as a point in favor of running on the cloud, though: at least it would decrease the time you spend automating everything.

Except that would cause me 'more' work, as well as substantially more cost. ^.^;