http-rs / tide

Fast and friendly HTTP server framework for async Rust
https://docs.rs/tide
Apache License 2.0

Add benchmarks #65

Closed leaxoy closed 4 years ago

pickfire commented 5 years ago

Maybe it would be nice to add tide to https://github.com/TechEmpower/FrameworkBenchmarks?

yoshuawuyts commented 5 years ago

I think it's important to ask ourselves why we want benchmarks. For me, I'd like them so we can work on tooling to make tests easier to write, and also so we can catch performance regressions just by running the benches.

I have a few reservations about the TechEmpower benchmarks, because they only measure raw throughput numbers without emulating real-world deployment environments. For example: there's a fair chance Node.js would show the same throughput as a Rust server on conventional hardware, because the network is the bottleneck. The only difference would be in latency, which isn't measured.

That doesn't mean the TechEmpower benchmarks can't be valuable. But it's worth being cautious about engaging with them early in the development phase, as they have the potential to:

  1. unjustly give us a bad image if we somehow don't do well in them.
  2. hold us back from API reforms or from rebuilding structural components, as those might change our benchmark scores even if the resulting performance is acceptable for production.

Sorry for a bit of an essay; I hope my perspective makes sense here. As a tl;dr: benchmarks sound cool, but we probably shouldn't compare ourselves to others quite yet, and should first think about what we want from them.

pickfire commented 5 years ago

@yoshuawuyts I understand that the benchmark would be interesting, but since we are at an early stage, it might hurt our image later on. However, it cuts both ways: since the project provides an ergonomic API, if the benchmark results excel, that would greatly boost the project's image.

It's like a double-edged sword; it could be good or bad at the same time. Maybe we can do this: first add benchmarks somewhere else, and only add them to TechEmpower if that goes well. Having benchmarks could also catch performance regressions; tide could be a lot faster at first but become slow later on.

yoshuawuyts commented 5 years ago

@pickfire cool yeah, I think we're on the same page then! -- I'd probably propose starting with a benches/ directory in Tide so we can run cargo bench on it.

The question then becomes: what do we want to start benchmarking?
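Whatever we end up measuring, the harness can start tiny. As a rough illustration (this is a hand-rolled, std-only sketch, not the cargo bench harness itself; the `bench` helper and the workload are hypothetical stand-ins for whatever Tide routine we pick):

```rust
use std::time::Instant;

// Minimal std-only micro-benchmark loop. `name`, `iters`, and the
// workload closure are hypothetical placeholders; a real benches/
// setup would let cargo bench (or a crate like criterion) handle
// warm-up, statistics, and regression tracking for us.
fn bench<F: FnMut() -> u64>(name: &str, iters: u32, mut f: F) -> u64 {
    let mut acc = 0u64; // accumulate results so the work isn't optimized away
    let start = Instant::now();
    for _ in 0..iters {
        acc = acc.wrapping_add(f());
    }
    let elapsed = start.elapsed();
    println!("{}: {:?} total, {:?} per iteration", name, elapsed, elapsed / iters);
    acc
}

fn main() {
    // Hypothetical workload standing in for e.g. route matching or parsing.
    bench("sum_1_to_1000", 10_000, || (1..=1000u64).sum());
}
```

The same loop shape works for any candidate: swap the closure for the code path under test and compare the per-iteration numbers across commits.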

pickfire commented 5 years ago

@yoshuawuyts Nice! Maybe what is benchmarked in TechEmpower could be used as a hello world for benchmarking; that would also make it easier to port over later on.
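For context, the TechEmpower "plaintext" test is essentially a hello-world endpoint that returns `Hello, World!` over HTTP. The sketch below shows the round trip with raw std networking only; it is deliberately not Tide's API, just an illustration of what such a benchmark endpoint has to produce:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Build a minimal HTTP/1.1 plaintext response of the shape the
// TechEmpower "plaintext" test expects.
fn plaintext_response(body: &str) -> String {
    format!(
        "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {}\r\n\r\n{}",
        body.len(),
        body
    )
}

// Accept a single connection and answer it, then close.
// A real server (Tide, actix-web, warp) loops and keeps connections alive.
fn serve_one(listener: TcpListener) {
    let (mut stream, _) = listener.accept().unwrap();
    let mut buf = [0u8; 1024];
    let _ = stream.read(&mut buf); // drain the request bytes
    stream
        .write_all(plaintext_response("Hello, World!").as_bytes())
        .unwrap();
}

fn main() {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    thread::spawn(move || serve_one(listener));

    // Act as our own client to show the full round trip end to end.
    let mut stream = TcpStream::connect(addr).unwrap();
    stream.write_all(b"GET / HTTP/1.1\r\nHost: x\r\n\r\n").unwrap();
    let mut out = String::new();
    stream.read_to_string(&mut out).unwrap();
    assert!(out.ends_with("Hello, World!"));
}
```

A framework port of this endpoint is small enough to double as both an example and the first benchmark target.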

secretfader commented 5 years ago

I think general benchmarks that can be easily migrated to new versions of Tide would be helpful for internal use. That's a very different proposal from creating benchmarks for TechEmpower, which requires each test to fit into one of its predefined categories.

The former is about closing the loop on technical decisions where performance is concerned; the latter is about marketing. Both have value, but we're not yet in a position to reap value from a marketing-oriented benchmark, in my opinion.

pickfire commented 5 years ago

Would code that acts as benchmarks, tests, and examples at the same time help as well? Perhaps we could take all the code in examples and run it as benchmarks without writing separate ones.

secretfader commented 5 years ago

@pickfire That's a great suggestion. Like @yoshuawuyts, I have open questions about which benchmarks would be the most helpful, but I'm definitely in favor of reusing examples where possible.

If we can push up basic monitoring of endpoint performance, I think that's a good start. I know there's talk of introducing the concept of middleware "stacks" which is another prime candidate for benchmarking, as are the various Middleware implementations that utilize Context (now that #156 has landed).

prasannavl commented 5 years ago

I agree with all of what's been said here. But I just want to add that there's a marketing and framework-discovery aspect to be gained from the TechEmpower benchmarks, which could also help boost community engagement.

I personally have discovered many frameworks in the past by being led on from there (especially folks coming from other languages do). I have also seen projects use the same rationale to not get on board with TechEmpower, only to find in most cases that they were just missing out (.NET Core is a great example of this). At the end of the day, it provides some insight; sure, that insight can be misconstrued without the right context, but it largely works to place frameworks, thanks to the different combinations of setups in there. So I think it'd definitely be valuable to add them, though I'd probably prefer to tag it as community, so someone interested can pick it up and add the TechEmpower ones.

Also, yes, of course, it is a double-edged sword. But considering the heavy lifting is done by hyper and runtime/tokio at the moment, it should be hard for tide's performance not to match, out-do, or at least come close to actix-web. If it doesn't, that's an indicator that something could use improvement.

(Now, if this were a Node.js web framework, I'd have a totally different view, but just being in Rust on an async stack gives an inherent advantage that IMO should be capitalised on, considering we want to reduce barriers by making more people comfortable with the idea of writing web servers in Rust.)

rohitjoshi commented 4 years ago

I ran a comparative benchmark between actix-web's basic example, tide's hello example, and warp's hello example.

TPS (requests/sec):

tide:      49596
actix-web: 62979
warp:      55537

tide

rjoshi:~/projects/wrk$ wrk -t4 -c 100  -d30s http://localhost:8080
Running 30s test @ http://localhost:8080
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.67ms  736.10us  19.53ms   73.09%
    Req/Sec    12.47k     1.42k   16.48k    72.83%
  1490130 requests in 30.05s, 210.32MB read
Requests/sec:  49596.58
Transfer/sec:      7.00MB

actix-web:

rjoshi:~/projects/wrk$ wrk -t4 -c 100  -d30s http://localhost:8080
Running 30s test @ http://localhost:8080
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.28ms    1.02ms  25.52ms   80.40%
    Req/Sec    15.86k     5.92k   27.75k    54.75%
  1895577 requests in 30.10s, 265.74MB read
Requests/sec:  62979.37
Transfer/sec:      8.83MB

warp:

rjoshi:~/projects/wrk$ wrk -t4 -c 100  -d30s http://localhost:3030
Running 30s test @ http://localhost:3030
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.65ms  747.95us  13.09ms   69.64%
    Req/Sec    13.96k     1.53k   17.86k    66.75%
  1667228 requests in 30.02s, 206.70MB read
Requests/sec:  55537.82
Transfer/sec:      6.89MB

yoshuawuyts commented 4 years ago

@rohitjoshi thanks for running the numbers; glad we're within 10-30% of the performance of other Rust frameworks without having spent much time optimizing; that's not too bad.

I'm going to go ahead and close this. I still stand by what I said in April: it's nice to have benchmarks, but mostly to track our own performance numbers rather than to compare how we fare against others.