erikdubbelboer / fasthttp

This fork has been deprecated!
MIT License

Why not include CoarseTime? #45

Closed dgrr closed 6 years ago

dgrr commented 6 years ago

Hello contributors.

A few days ago I decided to start developing a few enhancements for fasthttp (such as HTTP/2 or a cookie jar for clients). That is why I ran some benchmarks with wrk comparing valyala's fasthttp against erik's fork. The best results I got are the following:

Valyala's fasthttp:

Running 5s test @ http://localhost:8080
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.33ms    2.09ms  15.79ms   67.53%
    Req/Sec   648.75k    63.40k  805.09k    74.10%
  3095472 requests in 5.00s, 433.95MB read
Requests/sec: 618930.88
Transfer/sec:     86.77MB

Erik's fasthttp:

Running 5s test @ http://localhost:8080
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.98ms    2.59ms  45.19ms   68.01%
    Req/Sec   561.02k    54.54k  687.27k    69.25%
  2674872 requests in 5.00s, 374.99MB read
Requests/sec: 534888.60
Transfer/sec:     74.99MB

This benchmark was run on a dual-core machine (I know it is not the best computer for benchmarking this).

After seeing the results I looked into the differences between the two libraries. One of them is the CoarseTimeNow function.

As Erik stated here, this function does not provide a real speedup. These are the results after implementing CoarseTimeNow in Erik's fork:

Running 5s test @ http://localhost:8080
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.87ms    3.03ms  85.61ms   79.04%
    Req/Sec   570.13k    52.40k  669.00k    71.79%
  2729952 requests in 5.00s, 382.71MB read
Requests/sec: 546018.25
Transfer/sec:     76.55MB

As you can see, there is a small improvement. While looking into this function I remembered that I had replied to another user who optimized it (see here). Implementing bronze1man's code I got the following result:

Running 5s test @ http://localhost:8080
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.74ms    2.44ms  21.12ms   68.30%
    Req/Sec   587.20k    51.72k  708.00k    70.57%
  2805288 requests in 5.00s, 393.27MB read
Requests/sec: 561012.49
Transfer/sec:     78.65MB

As you can see, there is an improvement that could account for the difference between the first benchmark and the last one.

illotum commented 6 years ago

All your benchmarks have ~ 70% deviation. Additionally, why include I/O if you're comparing just Time?

Try running it with less noise, or do a -count=N microbenchmark and compress the results with https://godoc.org/golang.org/x/perf/cmd/benchstat

dgrr commented 6 years ago

No, those numbers come from wrk, not from the server. There are no such I/O operations.

erikdubbelboer commented 6 years ago

I think the changes that affect the speed the most are https://github.com/erikdubbelboer/fasthttp/commit/68b32a0d51e814d5ba73c198ffe0f239ffa53573 and https://github.com/erikdubbelboer/fasthttp/commit/477f2120b4121afe5c032aeec5d3cadd43090f7d which are bug fixes that are required.

I don't think a difference between 3095472 and 2674872 requests is a big deal. Once you start doing something meaningful in your requests it's going to be much slower anyways. The most important thing about fasthttp is that it has far fewer heap allocations than net/http, and this affects more than just speed.

erikdubbelboer commented 6 years ago

Also CoarseTime introduced many bugs: https://github.com/valyala/fasthttp/issues/260, https://github.com/valyala/fasthttp/issues/311, https://github.com/valyala/fasthttp/issues/261 and https://github.com/valyala/fasthttp/issues/271 for example.

illotum commented 6 years ago

A wrk benchmark is I/O. And erik's argument trumps it all tbh.

dgrr commented 6 years ago

Oh, you are right! I did not see the bugs that CoarseTime causes.