networknt / microservices-framework-benchmark

Raw benchmarks on throughput, latency and transfer of Hello World on popular microservices frameworks
MIT License
705 stars 127 forks

* change some config #16

Closed wangkaish closed 7 years ago

wangkaish commented 7 years ago

Hi @stevehu , I have changed some of the config for my test, and changed the test script to "wrk -t{CPU_SIZE} -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 2048"; it may get a better result this time. Thanks

wangkaish commented 7 years ago

@stevehu About the "connection reset by peer" exception: it appears in the vertx test case too, so just ignore it for now.

stevehu commented 7 years ago

@kevin-better I will rerun the test tomorrow night with your new config, but the wrk command line must be the same as for the other tests. Your command line would reduce the CPU usage of wrk and give your server more CPU power than the other frameworks, which is unfair to the others. Thanks.

wangkaish commented 7 years ago

@stevehu I ran this command: "wrk -t4 -c128 -d30s http://localhost:8080 -s pipeline.lua --latency -- / 16". The CPU usage is always between 0 and 1 percent; I don't know whether that is normal or not. I am unfamiliar with Lua scripts; could you please tell me what pipeline.lua does? And why was the result so different between the arguments 16 and 2048 at the end of the command line?

stevehu commented 7 years ago

What percentage increase do you see with 2048 vs 16? wrk and the server might just be competing for threads. The light-java framework can reach 2.18 million requests/sec throughput just by increasing from 16 to 50. You can find the test results in README.md.
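The pipeline.lua script referenced above is not shown in this thread. As a rough sketch (an assumption, not the repo's actual file), a typical wrk pipelining script takes the request path and a pipeline depth as the trailing `-- <path> <depth>` arguments and pre-builds one buffer of concatenated requests:

```lua
-- Sketch of a typical wrk pipelining script; the argument layout here
-- (args[1] = path, args[2] = depth) is an assumption about how the
-- trailing "-- / 16" arguments in the commands above are consumed.
init = function(args)
  local path  = args[1] or "/"
  local depth = tonumber(args[2]) or 1

  -- Pre-build one buffer containing <depth> concatenated HTTP requests,
  -- so each socket write sends a whole pipelined batch at once.
  local r = {}
  for i = 1, depth do
    r[i] = wrk.format("GET", path)
  end
  req = table.concat(r)
end

request = function()
  -- wrk calls this per request cycle; returning the whole pre-built
  -- batch is what makes the depth argument (16 vs 2048) matter.
  return req
end
```

Under this reading, a larger depth means fewer writes and less per-request client overhead, which would explain why 2048 reports much higher throughput than 16.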

wangkaish commented 7 years ago

```
wangkai@wangkai-OptiPlex-9020:/var/apps/git-rep/microservices-framework-benchmark$ wrk -t8 -c128 -d15s http://localhost:8080 -s pipeline.lua --latency -- / 1024
Running 15s test @ http://localhost:8080
  8 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    38.68ms   27.21ms 200.09ms   64.67%
    Req/Sec   255.95k    57.25k  603.77k    77.41%
  Latency Distribution
     50%   36.10ms
     75%   59.63ms
     90%   86.82ms
     99%    0.00us
  30220290 requests in 15.09s, 3.74GB read
Requests/sec: 2002224.02
Transfer/sec:    253.96MB
wangkai@wangkai-OptiPlex-9020:/var/apps/git-rep/microservices-framework-benchmark$ wrk -t8 -c128 -d15s http://localhost:8080 -s pipeline.lua --latency -- / 1024
Running 15s test @ http://localhost:8080
  8 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    39.40ms   27.47ms 299.49ms   63.91%
    Req/Sec   253.24k    46.20k  468.96k    70.92%
  Latency Distribution
     50%   37.81ms
     75%   63.08ms
     90%   92.35ms
     99%    0.00us
  30424648 requests in 15.09s, 3.77GB read
Requests/sec:
Transfer/sec:    255.66MB
wangkai@wangkai-OptiPlex-9020:/var/apps/git-rep/microservices-framework-benchmark$
```

wangkaish commented 7 years ago

@stevehu this is the test result

stevehu commented 7 years ago

Impressive! Have you tried using 16? What is your CPU, and how many cores and threads does it have?

wangkaish commented 7 years ago

I have tried 16; the CPU usage is always between 0 and 1 percent. My CPU is an i7-4790.