switchtrue opened this issue 8 years ago (status: Open)
Hi @mleonard87,
Thanks for taking the time to look into the performance of graphql-go, this is excellent!
I'm glad that someone is taking up the challenge of figuring out weak points in the library; it helps to steer the direction of development.
The code for both graphql-go and express-graphql seems fair, at first glance.
Edit: It would probably help to state the version of graphql-js that was used for the benchmark. Currently graphql-go is equivalent to v0.4.3 of graphql-js (latest: v0.4.18). Other information, such as the versions of Node.js, Go, and Express, would be nice as well.
Regarding areas of improvement, I can offer some notes I already have that would help with the effort of improving and optimising performance:
- Replacing the parser with libgraphqlparser would require mapping its structs to the existing graphql-go structs, but it can be done.
- The visitor and validator are currently non-performant. This will partly be addressed in PR #117, hopefully (still WIP; I need to find more time to work on it, it's about 30% done):
  - The validator will be able to run validation concurrently (vs sequentially at the moment).
  - The visitor will be able to visit nodes in parallel.
Both improvements to the visitor and validator are already in that PR branch; perhaps you could try running the benchmarks on that branch to see if there are any improvements?
Again, we appreciate the work you put into this and welcome your contribution very much!
Cheers
Yeah, absolutely, I can run these tests again on your branch later when I get home. I'll also clarify my express-graphql version and benchmark like-for-like. I believe this was against v0.4.18.
Also, has there been any discussion about caching the results of validate/parse? I think that in a lot of applications the same query may be executed often. For example, in a Todo app the main page might fetch a list of all the Todos, and the GraphQL query itself would be the same each time even if the results are different. Have you seen this in any other GraphQL implementations?
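As an illustration of the caching idea, a parse/validate cache could be a map keyed by the raw query string, guarded by an RWMutex. `parsedQuery`, `expensiveParse`, and `getParsed` are made-up names; real code would cache the validated AST instead of this placeholder struct:

```go
package main

import (
	"fmt"
	"sync"
)

// parsedQuery stands in for a parsed and validated AST; the real
// graphql-go AST types are not used here.
type parsedQuery struct{ query string }

// expensiveParse simulates the parse + validate work we want to skip.
func expensiveParse(q string) *parsedQuery { return &parsedQuery{query: q} }

var (
	cacheMu sync.RWMutex
	cache   = map[string]*parsedQuery{}
)

// getParsed returns a cached result for the query string, parsing on
// a miss. Two concurrent misses may both parse; the second write just
// overwrites the first, which is harmless for an immutable AST.
func getParsed(q string) *parsedQuery {
	cacheMu.RLock()
	p, ok := cache[q]
	cacheMu.RUnlock()
	if ok {
		return p
	}
	p = expensiveParse(q)
	cacheMu.Lock()
	cache[q] = p
	cacheMu.Unlock()
	return p
}

func main() {
	a := getParsed("{hello}")
	b := getParsed("{hello}") // cache hit: same pointer comes back
	fmt.Println(a == b)       // prints true
}
```

A production version would also bound the cache size (e.g. an LRU), since query strings can be attacker-controlled.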
There is a graphql lib based on libgraphqlparser, https://github.com/tallstreet/graphql, though it is probably not active anymore. One concern I have is using it in sandboxed cloud services like Google App Engine, which (used to) restrict the use of cgo. Please consider other drawbacks compared to a pure Go version.
@sogko
Here are benchmarks against the same code for different versions of graphql-go and graphql-js. I've repeated the ones from above today as well, to try and keep things consistent.
I ran each test 5 times; the full results can be found here, but I've included just the best run for each below. They don't vary enough between runs to worry about.
Things to note:
- The sogko/0.4.18 branch has absolutely smashed it: significantly faster than the fastest graphql-js test I've seen, and usually with far fewer errors.
- graphql-js 0.4.18 is faster than graphql-js 0.4.3.
- graphql-go master is still slower than both graphql-js versions.
Overall your new branch is showing incredible performance; I totally wasn't expecting this. Amazing!
Go: go1.6 darwin/amd64
Node.js: v5.5.0
Express: 4.13.4
$ ./wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3002/graphql?query={hello}"
Running 30s test @ http://localhost:3002/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 206.25ms 33.05ms 545.89ms 80.82%
Req/Sec 99.13 90.37 455.00 83.64%
34701 requests in 30.09s, 7.71MB read
Socket errors: connect 157, read 38, write 0, timeout 0
Requests/sec: 1153.33
Transfer/sec: 262.43KB
$ ./wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3003/graphql?query={hello}"
Running 30s test @ http://localhost:3003/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 213.99ms 216.00ms 2.17s 85.37%
Req/Sec 137.53 67.52 350.00 65.51%
35429 requests in 30.10s, 4.87MB read
Socket errors: connect 157, read 20, write 1, timeout 0
Requests/sec: 1177.02
Transfer/sec: 165.52KB
$ ./wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3002/graphql?query={hello}"
Running 30s test @ http://localhost:3002/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 171.13ms 32.39ms 789.92ms 90.02%
Req/Sec 119.80 80.91 333.00 66.08%
41475 requests in 30.10s, 9.22MB read
Socket errors: connect 157, read 172, write 5, timeout 0
Requests/sec: 1377.75
Transfer/sec: 313.49KB
$ ./wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3003/graphql?query={hello}"
Running 30s test @ http://localhost:3003/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 45.52ms 42.43ms 508.00ms 70.84%
Req/Sec 613.51 302.63 1.36k 68.96%
164704 requests in 30.10s, 22.62MB read
Socket errors: connect 157, read 128, write 0, timeout 0
Requests/sec: 5472.34
Transfer/sec: 769.55KB
From @bbuck: One concern I have is using it (libgraphqlparser) in sandboxed cloud services like google app engine, which (used to) restricts the use of cgo. Please consider other drawbacks compared to pure go version.
@bbuck That is an interesting insight, we have to keep this in mind (to use or not to use cgo) and figure out how to go about doing this.
Hi @mleonard87,
Thanks for running more benchmark tests for the different configurations; I really appreciate the time you put into this.
Woah, those results seem really promising, I'm quite surprised myself. Now this is making me wonder how this library fares against GraphQL libraries on other platforms (graphql-ruby, Sangria, etc.).
In the future, we could possibly have a separate repo within the graphql-go org for benchmark results and the code used for different platforms, probably similar to https://github.com/julienschmidt/go-http-routing-benchmark. Probably something like github.com/graphql-go/benchmarks.
/cc @chris-ramon
A benchmark repo is not a bad idea. We'd probably need something more complex than my very trivial hello-world test case.
I had wondered myself about other libraries. I might give them a go when I get some spare time.
If these times can be maintained once PR #117 is complete, then I think this can be a blazingly fast library.
I have checked the performance too and found that the current implementation of the Lexer produces a lot of garbage and slows everything by orders of magnitudes. For details and a possible fix see here: #137
@Matthias247 Nice tip. I've got to say it's better to use bytes.Buffer in Go than strings, since you can pool and reuse buffers. We use them a lot, and even fast frameworks like https://github.com/valyala/fasthttp and https://github.com/labstack/echo use them to get the highest speed.
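As a minimal sketch of the buffer-pooling idea (not code from any of these libraries; `bufPool` and `render` are hypothetical names), `sync.Pool` lets a hot path reuse `bytes.Buffer` values instead of allocating a fresh buffer per request:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable bytes.Buffer values, so steady-state
// requests stop allocating a new buffer and backing array each time.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// render writes a tiny response body using a pooled buffer.
func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(buf)
	buf.Reset() // a pooled buffer may still hold old contents
	buf.WriteString(`{"data":{"hello":"`)
	buf.WriteString(name)
	buf.WriteString(`"}}`)
	return buf.String()
}

func main() {
	fmt.Println(render("world")) // prints {"data":{"hello":"world"}}
}
```

Note that `buf.String()` still copies the bytes out; code that writes straight to an `io.Writer` avoids even that allocation.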
Anyway, I ran the benchmark on my machine against our implementation of GraphQL, and here it is:
wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3003/graphql?query={hello}"
Running 30s test @ http://localhost:3003/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 134.97ms 163.47ms 1.85s 86.12%
Req/Sec 372.46 236.09 1.58k 70.99%
133607 requests in 30.05s, 18.35MB read
Requests/sec: 4445.99
Transfer/sec: 625.22KB
wrk -t12 -c400 -d30s --timeout 10s "http://localhost:3003/graphql?query={hello}"
Running 30s test @ http://localhost:3003/graphql?query={hello}
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 34.89ms 43.72ms 518.00ms 87.58%
Req/Sec 1.44k 0.90k 6.10k 81.35%
514095 requests in 30.05s, 70.60MB read
Requests/sec: 17108.13
Transfer/sec: 2.35MB
And by the way, shouldn't these be benchmark tests inside the library rather than ad-hoc HTTP scripts? Then we could also measure allocations and throughput per operation. Here's the code:
package main

import (
	"encoding/json"
	"fmt"
	"net/http"

	"github.com/playlyfe/go-graphql"
)

func main() {
	schema := `
	type RootQueryType {
		hello: String
	}
	`
	resolvers := map[string]interface{}{}
	resolvers["RootQueryType/hello"] = func(params *graphql.ResolveParams) (interface{}, error) {
		return "world", nil
	}
	context := map[string]interface{}{}
	variables := map[string]interface{}{}
	executor, err := graphql.NewExecutor(schema, "RootQueryType", "", resolvers)
	if err != nil {
		panic(err)
	}
	http.HandleFunc("/graphql", func(w http.ResponseWriter, r *http.Request) {
		result, err := executor.Execute(context, r.URL.Query()["query"][0], variables, "")
		if err != nil {
			panic(err)
		}
		json.NewEncoder(w).Encode(result)
	})
	fmt.Println("Benchmark app listening on port 3003!")
	http.ListenAndServe(":3003", nil)
}
I did a benchmark without the HTTP overhead, using the go test tool, and this is what I get:
BenchmarkGoGraphQLMaster-4 10000 230846 ns/op 29209 B/op 543 allocs/op
BenchmarkPlaylyfeGraphQLMaster-4 50000 27647 ns/op 3269 B/op 61 allocs/op
Here's the code,
package graphql_test

import (
	"testing"

	"github.com/graphql-go/graphql"
	pgql "github.com/playlyfe/go-graphql"
)

var schema, _ = graphql.NewSchema(
	graphql.SchemaConfig{
		Query: graphql.NewObject(
			graphql.ObjectConfig{
				Name: "RootQueryType",
				Fields: graphql.Fields{
					"hello": &graphql.Field{
						Type: graphql.String,
						Resolve: func(p graphql.ResolveParams) (interface{}, error) {
							return "world", nil
						},
					},
				},
			}),
	},
)

func BenchmarkGoGraphQLMaster(b *testing.B) {
	for i := 0; i < b.N; i++ {
		graphql.Do(graphql.Params{
			Schema:        schema,
			RequestString: "{hello}",
		})
	}
}

var schema2 = `
type RootQueryType {
	hello: String
}
`

var resolvers = map[string]interface{}{
	"RootQueryType/hello": func(params *pgql.ResolveParams) (interface{}, error) {
		return "world", nil
	},
}

var executor, _ = pgql.NewExecutor(schema2, "RootQueryType", "", resolvers)

func BenchmarkPlaylyfeGraphQLMaster(b *testing.B) {
	for i := 0; i < b.N; i++ {
		context := map[string]interface{}{}
		variables := map[string]interface{}{}
		executor.Execute(context, "{hello}", variables, "")
	}
}
Thanks to @pyros2097. I added graph-gophers/graphql-go to the benchmark list; see the repo golang-graphql-benchmark.
BenchmarkGoGraphQLMaster-4 20000 84131 ns/op 27254 B/op 489 allocs/op
BenchmarkPlaylyfeGraphQLMaster-4 200000 7531 ns/op 2919 B/op 59 allocs/op
BenchmarkGophersGraphQLMaster-4 200000 5041 ns/op 3909 B/op 39 allocs/op
Code:
package graphql_test

import (
	"context"
	"testing"

	ggql "github.com/graph-gophers/graphql-go"
	"github.com/graphql-go/graphql"
	pgql "github.com/playlyfe/go-graphql"
)

var schema, _ = graphql.NewSchema(
	graphql.SchemaConfig{
		Query: graphql.NewObject(
			graphql.ObjectConfig{
				Name: "RootQueryType",
				Fields: graphql.Fields{
					"hello": &graphql.Field{
						Type: graphql.String,
						Resolve: func(p graphql.ResolveParams) (interface{}, error) {
							return "world", nil
						},
					},
				},
			}),
	},
)

func BenchmarkGoGraphQLMaster(b *testing.B) {
	for i := 0; i < b.N; i++ {
		graphql.Do(graphql.Params{
			Schema:        schema,
			RequestString: "{hello}",
		})
	}
}

var schema2 = `
type RootQueryType {
	hello: String
}
`

var resolvers = map[string]interface{}{
	"RootQueryType/hello": func(params *pgql.ResolveParams) (interface{}, error) {
		return "world", nil
	},
}

var executor, _ = pgql.NewExecutor(schema2, "RootQueryType", "", resolvers)

func BenchmarkPlaylyfeGraphQLMaster(b *testing.B) {
	for i := 0; i < b.N; i++ {
		context := map[string]interface{}{}
		variables := map[string]interface{}{}
		executor.Execute(context, "{hello}", variables, "")
	}
}

type helloWorldResolver1 struct{}

func (r *helloWorldResolver1) Hello() string {
	return "world"
}

var schema3 = ggql.MustParseSchema(`
	schema {
		query: Query
	}
	type Query {
		hello: String!
	}
`, &helloWorldResolver1{})

func BenchmarkGophersGraphQLMaster(b *testing.B) {
	for i := 0; i < b.N; i++ {
		ctx := context.Background()
		variables := map[string]interface{}{}
		schema3.Exec(ctx, "{hello}", "", variables)
	}
}
Hi, so we are using this library at work and were looking at a certain optimisation.
In graphql-go, the default limit on the maximum number of resolvers per request allowed to run in parallel was 10; this is now increased and passed as an option during schema initialisation. Performance improvements and other impacts still have to be determined by load testing.
We've increased the maximum number of resolvers per request to 50 for now. Quick question though: was there a specific reason why the graphql-go limit was set to 10? Or was it a starting value that hasn't been experimented with?
Thank you again for creating an amazing library for people to work with.
Hi @salman-bhai, we don't actually set a maximum number of resolvers per request to run in parallel within graphql-go/graphql; we don't have such a limit at all.
Perhaps you are referring to a limitation of a different library, graph-gophers/graphql-go?
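For context, a per-request resolver cap like the one described is commonly implemented with a buffered channel used as a semaphore (graph-gophers/graphql-go exposes such a knob as a schema option, I believe `MaxParallelism`, with a default of 10). The sketch below is illustrative only; `resolveAll` is a hypothetical helper, not an API of either library:

```go
package main

import (
	"fmt"
	"sync"
)

// resolveAll runs resolver funcs in parallel, but never more than
// maxParallel at once: each goroutine must take a slot from the
// buffered channel before doing work, and returns it when done.
func resolveAll(resolvers []func() interface{}, maxParallel int) []interface{} {
	sem := make(chan struct{}, maxParallel)
	out := make([]interface{}, len(resolvers))
	var wg sync.WaitGroup
	for i, r := range resolvers {
		wg.Add(1)
		go func(i int, r func() interface{}) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks when full)
			defer func() { <-sem }() // release the slot
			out[i] = r()
		}(i, r)
	}
	wg.Wait()
	return out
}

func main() {
	rs := make([]func() interface{}, 5)
	for i := range rs {
		i := i // capture the loop variable
		rs[i] = func() interface{} { return i * i }
	}
	fmt.Println(resolveAll(rs, 2)) // prints [0 1 4 9 16]
}
```

Raising the cap helps when resolvers are I/O-bound (waiting on databases or network); for CPU-bound resolvers a cap near GOMAXPROCS is usually the better trade-off, which may be why a small default was chosen.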
According to #106, performance has not been a key concern so far compared to functionality, which I completely agree with. However, out of interest I started running some simple and contrived load tests using wrk. I compared an absolutely barebones GraphQL setup in graphql-go and express-graphql (code in the gists linked below); here are some results for a test with 12 threads and 400 connections over a 30-second period.
Notable points are:
Given the claim of no real optimisations so far, I think this is an excellent starting point, especially given the significantly lower failure rate. However, I think for such a trivial test case the timings should be much closer.
I'm pretty new to go but I'm looking to further my skills. I'm going to try and tackle some of the other open issues first but I would like to come back to this and help where I can. Perhaps in the meantime we could start a discussion on how to improve and discover some areas of code that could be investigated.
I have also attached a flame graph at the bottom, sampled for 15 seconds during the middle of a 30-second load test. It indicates that most of the time in graphql-go is spent inside graphql.ValidateDocument(), and specifically visitor.Visit().
Query Used
express-graphql
graphql-go
Code
graphql-go express-graphql
Flame Graph