networknt / microservices-framework-benchmark

Raw benchmarks on throughput, latency and transfer of Hello World on popular microservices frameworks
MIT License

run the tests on different machines #12

Open domdorn opened 7 years ago

domdorn commented 7 years ago

Hi,

Your tests are biased because they run on a single machine. You should at least use one machine as the server and another to run the tests; otherwise the server and the test runner compete for resources and your results may be skewed.

Make sure to use real hardware, not virtualised servers.
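
If the benchmark does have to stay on one machine, the contention described above can be reduced (though not eliminated) by pinning the server and the load generator to disjoint CPU cores. A minimal sketch, assuming a Linux box with at least four cores and a hypothetical `server.jar` listening on port 8080:

```sh
# Pin the framework under test to cores 0-1
taskset -c 0,1 java -jar server.jar &

# Pin wrk to cores 2-3 so it never competes with the server for a core
taskset -c 2,3 wrk -t2 -c128 -d30s --latency http://localhost:8080/
```

This stops the scheduler from bouncing both processes across the same cores, but memory bandwidth and the loopback network stack are still shared, so two physical boxes remain the cleaner setup.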

stevehu commented 7 years ago

@domdorn The purpose of the benchmarks is to gauge raw throughput, and latency under maximum throughput, in order to measure the overhead of each framework. I agree that the client (wrk) and the server (framework) should ideally sit on two different physical boxes; however, the network then becomes the bottleneck for some of the high-performance servers. With the client on the same box, it consumes some CPU, but that usage scales linearly. This is a little unfair to the servers that can handle around 1 million requests per second, but we are not looking at absolute numbers; we are looking at relative numbers against the Java EE based servers far down the chart.

Ideally I would run the tests on cloud VMs, but the computing power of those environments fluctuates over time. I run the tests on one of my desktops with an aging i5 (4 cores / 4 threads) to simulate a large t-shirt-size VM, which is our target production environment. Since the source code is available, a lot of people have run the benchmarks on their own hardware with similar results. I have been planning to dockerize these but never found enough time to do so. Thanks for your recommendations.
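
The dockerization mentioned at the end could look something like the following sketch, assuming each framework's Hello World server is packaged as an image (the `framework-hello` name and port are hypothetical, and the exact thread/connection counts in the repo's scripts may differ). Capping the container with `--cpus` keeps host cores free for wrk, approximating the core-pinning idea above:

```sh
# Build the server image from a framework's Dockerfile, then run it
# with a CPU cap so the load generator has dedicated cores left over
docker build -t framework-hello .
docker run -d --name hello -p 8080:8080 --cpus=2 framework-hello

# Benchmark from the host: saturate the server and report the
# latency distribution under max throughput
wrk -t4 -c256 -d30s --latency http://localhost:8080/
```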