Closed — dcu closed this issue 8 years ago
@dcu Maybe, it looks like a simple binary tree. Anyway, I can't find a way to run the benchmarks myself, because the Iris developer didn't fork go-http-benchmarks with Iris included.
@dcu I just found a way to run the tests:
BenchmarkGin_GithubAll 20000 74933 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GithubAll 20000 61231 ns/op 0 B/op 0 allocs/op
The difference is not that big.
Anyway, I should look for potential optimization opportunities.
@manucorporat nice work
It uses a cache to inflate its results. The numbers were substantially different when I added it to the go-http-router-benchmark suite and measured it without the cache (so actually measuring the router itself), and it was slower in either case.
Caching disabled
BenchmarkGin_GithubAll 50000 36235 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GithubAll 10000 132241 ns/op 0 B/op 0 allocs/op
Caching enabled
BenchmarkGin_GithubAll 50000 35648 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GithubAll 30000 42653 ns/op 0 B/op 0 allocs/op
https://www.reddit.com/r/golang/comments/4a8yit/is_this_the_fastest_go_web_framework/d0z0d42
@CaptainCodeman I ran the benchmark again.
Caching disabled
$ go test -bench=. -benchmem=true -benchtime=10s
#GithubAPI Routes: 203
Gin: 52464 Bytes
Iris: 80520 Bytes
PASS
BenchmarkGin_GithubAll 200000 63869 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GithubAll 300000 47437 ns/op 0 B/op 0 allocs/op
ok github.com/kataras/iris 28.160s
Caching enabled
$ go test -bench=. -benchmem=true -benchtime=10s
#GithubAPI Routes: 203
Gin: 54256 Bytes
Iris: 81784 Bytes
PASS
BenchmarkGin_GithubAll 200000 62737 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GithubAll 500000 26655 ns/op 0 B/op 0 allocs/op
ok github.com/kataras/iris 26.881s
As I explained in the reddit thread several times, I was seeing a huge discrepancy between the results I got when I ran the tests in the Iris project vs. when I ran them in the go-http-router-benchmark project (for both Gin and Iris), so I didn't find them convincing. For a valid comparison, I think the author really needs to add his router to the go-http-router-benchmark project.
I'm also not interested in "caching enabled" results as those don't actually test the router, just the hash lookup (?) of the cache and IMO there are already better front-end caching solutions available for that.
@CaptainCodeman
Could you provide the go-http-router-benchmark results or the testing source code? I'd like to find out what's different between the Iris project and go-http-router-benchmark.
Sorry @appleboy, after the last update it started failing the unit tests, and since the author was unwilling to accept the results anyway, I didn't see any point in bothering with it anymore, so I deleted it all.
@CaptainCodeman Thanks for your explanation.
@manucorporat @CaptainCodeman @appleboy @dcu It's not as fast as the author portrays, there is the pull request I just put in to bring it back in line: https://github.com/kataras/iris/pull/14
and updated the reddit posting: https://www.reddit.com/r/golang/comments/4a8yit/is_this_the_fastest_go_web_framework/d15n2oa
@kataras Why did you close https://github.com/kataras/iris/pull/14? If someone like me wants to test the Iris framework, they will get wrong benchmark results.
By the way, you should also update the benchmark results in the Iris project README and the benchmark chart on the Iris website.
Thanks.
@kataras The handler in your benchmark (Iris) was still empty, and you're comparing different things here. The benchmarks should compare routers, not routers with caching enabled. Regardless, I wish you the best with your project, and I'm requesting we close this issue since the title is not correct. Maybe a better issue would be "Implement optional caching", like:
r := gin.Default()
r.SetCacheEnabled(true)
or something of that nature.
I agree - this approach to caching is overly simplistic, compares apples and oranges, and has already been solved better (e.g. Varnish or nginx, or CDN + origin + HTTP cache headers) in ways that won't fall over for non-trivial apps and traffic.
I'd still like to see the router added to the 'official' benchmark app for a fairer comparison, because the results I got were very different (even for Gin) from running things in your codebase.
@kataras You also need to learn to accept criticism or people pointing out errors in your approach - people telling you the results they get doesn't make them liars or "paid" (who do you think is paying them exactly and why?)
Don't waste your breath @CaptainCodeman. I gave the same advice in his repo, and all he did was "lock and limit conversation to collaborators", then delete his reddit account and edit his above comment to claim he doesn't have one.
sorry everyone for the noise.
Yeah, I wanted to leave it as a warning for anyone who may be suckered by the bogus claims. There's no convincing some people that they haven't found a way to turn lead into gold. LOL
He sent the Iris benchmark PR: https://github.com/julienschmidt/go-http-routing-benchmark/pull/57
It's comparable to Echo / Gin, but for some reason he's still insisting on artificially inflating his numbers by using the cache. If the other routers did the same, they would all produce near-identical results ... and of course it would blow up in real use, where parameters vary, memory isn't unlimited, and caching isn't quite so simple.
Result
$ go test -bench="Echo|Gin|Iris"
#GithubAPI Routes: 203
Echo: 76312 Bytes
Gin: 52464 Bytes
Iris: 60680 Bytes
#GPlusAPI Routes: 13
Echo: 7112 Bytes
Gin: 3856 Bytes
Iris: 4776 Bytes
#ParseAPI Routes: 26
Echo: 8032 Bytes
Gin: 6816 Bytes
Iris: 8304 Bytes
#Static Routes: 157
Echo: 61008 Bytes
Gin: 30400 Bytes
Iris: 35608 Bytes
PASS
BenchmarkEcho_Param 20000000 91.4 ns/op 0 B/op 0 allocs/op
BenchmarkGin_Param 20000000 83.2 ns/op 0 B/op 0 allocs/op
BenchmarkIris_Param 20000000 85.4 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_Param5 10000000 161 ns/op 0 B/op 0 allocs/op
BenchmarkGin_Param5 10000000 140 ns/op 0 B/op 0 allocs/op
BenchmarkIris_Param5 10000000 142 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_Param20 3000000 463 ns/op 0 B/op 0 allocs/op
BenchmarkGin_Param20 5000000 364 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_ParamWrite 10000000 217 ns/op 8 B/op 1 allocs/op
BenchmarkGin_ParamWrite 10000000 182 ns/op 0 B/op 0 allocs/op
BenchmarkIris_ParamWrite 10000000 181 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_GithubStatic 10000000 106 ns/op 0 B/op 0 allocs/op
BenchmarkGin_GithubStatic 20000000 105 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GithubStatic 20000000 113 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_GithubParam 10000000 179 ns/op 0 B/op 0 allocs/op
BenchmarkGin_GithubParam 10000000 165 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GithubParam 10000000 182 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_GithubAll 30000 40971 ns/op 0 B/op 0 allocs/op
BenchmarkGin_GithubAll 50000 36663 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GithubAll 50000 36879 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_GPlusStatic 20000000 87.9 ns/op 0 B/op 0 allocs/op
BenchmarkGin_GPlusStatic 20000000 83.5 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GPlusStatic 20000000 86.0 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_GPlusParam 20000000 115 ns/op 0 B/op 0 allocs/op
BenchmarkGin_GPlusParam 10000000 109 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GPlusParam 20000000 111 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_GPlus2Params 10000000 160 ns/op 0 B/op 0 allocs/op
BenchmarkGin_GPlus2Params 10000000 140 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GPlus2Params 10000000 149 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_GPlusAll 500000 2364 ns/op 0 B/op 0 allocs/op
BenchmarkGin_GPlusAll 1000000 1896 ns/op 0 B/op 0 allocs/op
BenchmarkIris_GPlusAll 1000000 2021 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_ParseStatic 20000000 89.7 ns/op 0 B/op 0 allocs/op
BenchmarkGin_ParseStatic 20000000 85.4 ns/op 0 B/op 0 allocs/op
BenchmarkIris_ParseStatic 20000000 83.5 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_ParseParam 20000000 101 ns/op 0 B/op 0 allocs/op
BenchmarkGin_ParseParam 20000000 90.8 ns/op 0 B/op 0 allocs/op
BenchmarkIris_ParseParam 20000000 90.6 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_Parse2Params 10000000 124 ns/op 0 B/op 0 allocs/op
BenchmarkGin_Parse2Params 20000000 111 ns/op 0 B/op 0 allocs/op
BenchmarkIris_Parse2Params 20000000 108 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_ParseAll 300000 4045 ns/op 0 B/op 0 allocs/op
BenchmarkGin_ParseAll 300000 3350 ns/op 0 B/op 0 allocs/op
BenchmarkIris_ParseAll 500000 3381 ns/op 0 B/op 0 allocs/op
BenchmarkEcho_StaticAll 50000 29023 ns/op 0 B/op 0 allocs/op
BenchmarkGin_StaticAll 50000 26319 ns/op 0 B/op 0 allocs/op
BenchmarkIris_StaticAll 50000 26179 ns/op 0 B/op 0 allocs/op
ok github.com/kataras/go-http-routing-benchmark 88.321s
This is something I don't really like about the Golang community: everyone seems obsessed with who has the fastest router. Who cares? Routing is only a tiny portion of the request time; once you start building real application logic on top, it won't matter anymore which router you chose, and the feature set becomes more important. Can we close this issue already... please?
@robvdl sure! 😉
@robvdl you mad because Iris is faster.
@varyoo I doubt it, I just know that routing is such a small portion of the request time that it's hardly worth the bother. How long have you been programming web apps? It sounds like you are a beginner just getting into Go, still chasing the dragon of who has the fastest router, which is really pointless. Have you coded any large web apps? How many big projects have you completed? Most of the request time goes towards DB queries and business logic, not things like routing and template rendering.
Please leave it be; this ticket is closed. Go use Iris if you think it is faster, go right ahead, but please stop posting over and over on this same ticket about Iris being faster.
@robvdl actually took the bait.
@robvdl @varyoo @CaptainCodeman @appleboy You're right; that's the reason this benchmark suite covers common situations like these: https://github.com/smallnest/go-web-framework-benchmark . Have a nice day and thanks for the support!
@manucorporat Could you update this issue, please? I want to know the latest state of the benchmarking. Thanks.
@kataras I'm falling in love with your broken English
@ar3s3ru Why, to this day, are native English speakers (or people with perfect English skills) so rude to non-native speakers? Could you even speak Russian? Or Chinese, maybe? I would fall in love if you could speak them perfectly.
Ad hominem everywhere... Come on, dude, that's not constructive discussion.
@raitucarp I think you took my words too seriously, just chill out dude
@ar3s3ru Despite my English, I don't think a racist's opinion should be considered (at least by me).
The best thing you can do for yourself right now is to apologize and star Iris (that is the only thing that matters to me from your side).
I'm (hard-core) loving you too, no offence, just chilling out as you proposed.
Thanks for using (or talking about) Iris!
@kataras oh, so now I'm racist? How so? (that escalated quickly, cit.)
I have nothing to be sorry of, and nothing to star Iris for.
I mean, if it can get things done, I'm all for it, but there are better alternatives out there. I just don't like how you handle public (opposing) opinion: you are too opinionated for my tastes.
Then again, the router is just a tiny part of the request time; it's all about query optimization. Anyway, thanks to you! :)
@ar3s3ru
Then again, the router is just a tiny part of the request time
Who spoke about the router only? The benchmark suite covers the response time (database queries, and so on), anyway.
but there are better alternatives out there
Can you show me one? If you find a better alternative, I'll stop Iris right now.
oh, so now I'm racist?
A racist will never admit that he is racist.
I have nothing to be sorry of
You're just unrepentant. I gave you a chance but you didn't take it. Have a good night.
Perhaps Gin can borrow some ideas to make the framework faster:
https://github.com/kataras/iris#benchmarks