neelance opened this issue 7 years ago
This looks much simpler and easier for new users, and it looks great. I use the Relay "Node" interface a lot and am not sure how much of a headache it would be not to have that validation. But I am OK with the tradeoff, considering the overall simplification. Thanks for your continuing effort on improving the library. Cheers.
This looks like a good way to lower the barrier to entry.
Would the existing way to define resolvers still be supported?
It could be that I'm used to the current method of writing resolvers, but I have a few concerns about doing away with the existing method and going only with this approach:
However, the only drawback I can think of, if this method of specifying resolvers lives alongside the existing one, is that having multiple ways to do the same thing can be confusing.
Would the existing way to define resolvers still be supported?
No, but you can emulate it via https://golang.org/ref/spec#Method_expressions.
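For anyone unfamiliar with the linked spec section, here is a tiny, self-contained illustration (the Resolver type and Greeting method are made up for the example): a method expression such as (*Resolver).Greeting is an ordinary function value whose first parameter is the receiver, which is exactly what lets a method be handed to a string-keyed registration API.

package main

import "fmt"

type Resolver struct{ prefix string }

func (r *Resolver) Greeting(name string) string { return r.prefix + name }

func main() {
    // The method expression yields a plain func(*Resolver, string) string,
    // so it can be stored in a map or passed to a registration call.
    f := (*Resolver).Greeting
    fmt.Println(f(&Resolver{prefix: "Hello, "}, "GraphQL"))
}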
- would force the creation of API-specific models if the underlying data does not match the GraphQL schema.
What do you mean by that?
- would make it harder to unit test, unless the functions are named.
- function names would have to be more verbose (maybe include the name of the object being resolved?) because we lose the ability to attach functions to types
I think you can still have them attached to a type by using https://golang.org/ref/spec#Method_expressions.
- it becomes a bit unclear how to organize graphql-related code inside your project (what if I want the registration to be separate from the function definitions?)
Please also explain some more.
I like it!
However, I feel that the downside is that a missing resolver might not be caught.
For example, if I just have a schema and no resolvers, will the missing resolvers only be caught at runtime?
s := graphql.ParseSchema(schemaIDL)
// no resolvers here.
Missing resolvers will throw an error at application startup, just like they do right now.
No, but you can emulate it via https://golang.org/ref/spec#Method_expressions.
Okay, that's perfect! I didn't realize methods could be used like this. In that case this would remain largely the same as the existing approach.
would force the creation of API-specific models if the underlying data does not match the GraphQL schema
From the example given, it looks like the functions return the actual type rather than a resolver.
Here's an example
// This type seems to be specific to the API, used instead of a resolver wrapper
type starship struct {
    ID     graphql.ID
    Name   string
    Length float64
}

// Imagine this is the type contained by starshipData
type legacyStarship struct {
    ID        int
    BrandName string
    Model     string
    Length    float32
}

s.Resolver("Query", "starship", func(r *Resolver, args *struct{ ID graphql.ID }) *starship {
    legacy := starshipData[args.ID]
    return &starship{
        ID:     graphql.ID(strconv.Itoa(legacy.ID)), // previously each of these would have a function that contains this logic
        Name:   strings.Join([]string{legacy.BrandName, legacy.Model}, " "),
        Length: float64(legacy.Length),
    }
})
Is it the case that you can no longer return a resolver type? If so, do we lose the ability to lazy-load fields?
it becomes a bit unclear how to organize graphql-related code inside your project (what if I want the registration to be separate from the function definitions?)
This is a moot point since I can just use method expressions, like you mentioned.
Is it the case that you can no longer return a resolver type?
Sure you can. Why shouldn't it be possible?
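To make that concrete, here is a sketch of how lazy loading could still work under the proposed API (this assumes the s.Resolver registration shown earlier also applies to object types; starshipResolver, crewMember and loadCrewFromDB are invented for the example):

// The Query resolver returns a lightweight wrapper instead of a fully
// materialized struct.
type starshipResolver struct {
    legacy legacyStarship
}

s.Resolver("Query", "starship", func(r *Resolver, args *struct{ ID graphql.ID }) *starshipResolver {
    return &starshipResolver{legacy: starshipData[args.ID]}
})

// The expensive "crew" field is resolved by its own function, so the database
// is only hit if the query actually selects that field.
s.Resolver("Starship", "crew", func(sr *starshipResolver) []*crewMember {
    return loadCrewFromDB(sr.legacy.ID)
})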
Ah I was confused... This seems great! I don't have any more concerns. Out of curiosity are there any performance implications to using this method?
I have not done any performance tests yet. However, ResolverField might be faster because it avoids a function call.
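For intuition about that difference, here is a small stand-alone benchmark (the droid type is invented for the example); it compares reading a struct field through reflection with calling a method through reflection, which is roughly the gap a field-based ResolverField could exploit.

// resolver_bench_test.go
package resolver

import (
    "reflect"
    "testing"
)

type droid struct{ Name string }

func (d *droid) GetName() string { return d.Name }

var sink string

// BenchmarkFieldRead reads the Name field directly via reflection.
func BenchmarkFieldRead(b *testing.B) {
    v := reflect.ValueOf(&droid{Name: "R2-D2"}).Elem()
    for i := 0; i < b.N; i++ {
        sink = v.Field(0).String()
    }
}

// BenchmarkMethodCall invokes the GetName method via reflection.
func BenchmarkMethodCall(b *testing.B) {
    m := reflect.ValueOf(&droid{Name: "R2-D2"}).Method(0)
    for i := 0; i < b.N; i++ {
        sink = m.Call(nil)[0].String()
    }
}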
It would be nice if you could define multiple field resolvers together to make it easier to keep track of the fields defined for your resolver
// graphql.FieldMap = map[string]string
s.ResolveFields("Droid", graphql.FieldMap{
    "id":              "ID",
    "name":            "Name",
    "appearsIn":       "AppearsIn",
    "primaryFunction": "PrimaryFunction",
})
or use struct tags
type droid struct {
    ID              graphql.ID `gql:"id"`
    Name            string     `gql:"name"`
    Friends         []graphql.ID
    AppearsIn       []string   `gql:"appearsIn"`
    PrimaryFunction string     `gql:"primaryFunction"`
}
If the struct tag route is up for discussion, it would be nice to just reuse json tags too, if possible.
Thanks for all the great work!
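A rough sketch of how the json-tag fallback suggested above could be implemented (purely illustrative; fieldMapFromTags is not an API of this library): walk the struct with reflection, prefer a gql tag, and fall back to the json tag.

package main

import (
    "fmt"
    "reflect"
    "strings"
)

type droid struct {
    ID   string `json:"id"`
    Name string `gql:"name" json:"full_name"`
}

// fieldMapFromTags builds a GraphQL-name -> Go-field-name map, preferring the
// gql tag and falling back to the json tag (ignoring options like ",omitempty").
func fieldMapFromTags(v interface{}) map[string]string {
    t := reflect.TypeOf(v)
    if t.Kind() == reflect.Ptr {
        t = t.Elem()
    }
    m := map[string]string{}
    for i := 0; i < t.NumField(); i++ {
        f := t.Field(i)
        name := f.Tag.Get("gql")
        if name == "" {
            name = strings.Split(f.Tag.Get("json"), ",")[0]
        }
        if name != "" {
            m[name] = f.Name
        }
    }
    return m
}

func main() {
    fmt.Println(fieldMapFromTags(droid{})) // map[id:ID name:Name]
}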
How do you plan to thread context?
@euforic I agree that such helpers may be nice. They should be easy to add on top of the simpler base API.
@nathanborror Sorry, but I don't understand your question.
I much, much, prefer the struct-based approach.
From your list of advantages:
- API easier to learn
I don't think it's easier. The API currently is essentially 2 functions (from the examples):
graphql.ParseSchema(starwars.Schema, &starwars.Resolver{})
http.Handle("/query", &relay.Handler{Schema: schema})
And graphql.ID for resolvers. That's about it.
In fact the API is very clean, and doesn't "overreach" in the app code. One way to see this is that the only dependency on graphql for the big chunk of the example code is on graphql.ID. And the server implementation only calls graphql.ParseSchema and relay.Handler. That's it.
- clean shortcuts like ResolverField
Having no shortcuts is cleaner. But I agree that for simple fields, some json-like annotations, as suggested in this issue and another one, could make things a tiny bit easier. That's also cleaner than ResolverField, I think.
You suggested that method expressions can be used. Yes, but instead of just adding a method to a struct (which I need anyway), now I also have to add one line of code, calling ResolverField with string arguments.
To quote @neelance on a related discussion ;)
I think that the following happened a lot during the design of the Go language:
Is it possible? Yes.
Would it give the option to write less code? Yes.
Should we add it? Probably not.
(Which made me laugh, because I think it's probably true :) )
And I think ResolverField is one of those too. You save 2 lines of code with the shortcut, but it's not cleaner than the function, which gives you more flexibility.
- most type checking at startup is still possible
"most" is not as good, so this is a step backwards (and a big one in my opinion)
- explicit name mapping between GraphQL and Go
- automatic handling of GraphQL type assertions
I either don't really understand what you mean, or don't see the difference/improvement over how things are now. Passing strings as function arguments doesn't map any better than function names (OK, they match the case). In fact, to me those strings are at least as magical as method names. If we were using something like ResolveField(schema.Human, schema.Friends, &Human.Friends), maybe, but again, this is just repeating what the Human.Friends struct/function already says.
The other thing is that code organization becomes a bit messier. The initSchema function is huge, and must have a direct dependency on every piece of the implementation: all the resolvers, all the fields. Of course I could pass around s, but it's still not as clean.
Ultimately, there is a much tighter link between the app code and the graphql-go library. Testing is not as easy (before, my tests would pretty much not know about graphql-go, just import my structs and my resolvers, and use that directly). Testing was mentioned before by @tonyghita, but another thing to add is that now testing has to include this initSchema function, and the implementation of my resolvers. Before it was only the resolvers.
But losing some type safety is sort of the biggest issue I have. Actually, I think it's already a pretty big problem that some of that checking is done at startup time, and not at build time. It is a tradeoff to make the API simpler (pass the schema as a string, and not an object that you have to build). Ideally it should be possible to build the schema only from the top-level resolver struct (the API would look like graphql.BuildSchema(&starwars.Resolver{}), as long as it doesn't require some sort of schemaBuilder.Object like here).
Obviously all of these arguments can become moot with a single helper function that does reflection, and makes all the s.Resolver() and s.ResolverField() calls for me (which is roughly what is happening currently)
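To make that last point concrete, such a helper could look roughly like this (a sketch only: it assumes the proposed s.Resolver(typeName, fieldName, fn) registration call from this issue, registerMethods itself is invented, and the reflect and strings packages are needed):

// registerMethods registers every exported method of the resolver type as a
// field resolver on the given GraphQL type, mapping e.g. Friends -> friends.
func registerMethods(s *graphql.Schema, typeName string, resolver interface{}) {
    t := reflect.TypeOf(resolver)
    for i := 0; i < t.NumMethod(); i++ {
        m := t.Method(i)
        fieldName := strings.ToLower(m.Name[:1]) + m.Name[1:]
        // m.Func is the method expression as a reflect.Value: a func whose
        // first argument is the receiver, so it fits the proposed API.
        s.Resolver(typeName, fieldName, m.Func.Interface())
    }
}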
@yohcop Thanks a lot for your feedback. Some thoughts:
About the type safety: It is really only about GraphQL interfaces. I also wasn't fully happy about how they worked before, it was more complicated than it needs to be.
About ResolverField: This is not only a shortcut, it is also performance relevant. Calling a function is much more expensive than reading a field. ResolverField will not just be a wrapper around Resolver.
About your "single helper function that does reflection": That won't work easily. graphql-go currently traverses the schema and the Go types in parallel to figure out mappings between GraphQL types and Go types. This is quite implicit. With the new API this will be much clearer to understand.
I have started working on the new approach and the graphql-go implementation has already become a bit less complicated, which I think is good. It will also make it easier to add some features requested in other issues. Please take a look at what the new API currently looks like; I have iterated on it a bit: https://github.com/neelance/graphql-go/blob/new-resolvers/example/starwars/starwars.go#L289
I hear your concerns, but right now this still looks like the right way to go.
Fair enough :) Actually I would take performance improvements over almost anything. I would be wary of promising improvements before benchmarks though (unless you have some already?).
Ultimately it's also your library, and you know it better than anyone.
Looking at the last iteration, I think I can live with it. It is a little complex in a way; here are a few questions:

- human, droid, etc.: is it just to fit the API?
- The Schema() function returns b.Build(&root{}). If root is not really necessary, could this be replaced with b.Build() (or a dummy root object), with &root{} instead built at query time, in the http handler? It could then be used to pass information about the query through resolvers for Queries and Mutations (e.g. logged-in user info). Currently one has to use Context.WithValue/Context.Value in a non-typesafe way, but a user-defined struct would be much nicer. Another mechanism would be welcome too, if root doesn't work here.

I'll keep following, keep us updated :+1:
Moving from https://github.com/neelance/graphql-go/issues/15#issuecomment-311453283
When do you think this will be done?
Also, can you use tags/releases so we can use a dependency management tool to manage the versioning as you develop more and change the API?
@neelance what's the decision on this? Is the new way final and just blocked on you finishing the work?
I like the approach; it definitely would make sense to go in that direction. It would take a good chunk of my boilerplate code away.
Also thanks @neelance for https://golang.org/ref/spec#Method_expressions
I found time to work on graphql-go today and made good progress on the new resolvers. My plan is to push a version tomorrow that is not fully finished, but ready for testing and feedback.
If you have a new release I'll test it out this week. Is there going to be a release tag or other versioning system?
All right, the new version on branch new-resolvers passes all tests. Feel free to give it a try. It still needs better error handling in a lot of places and some code cleanup. I'm also not really happy about the top-level value (the argument passed to Build). One option would be to remove it entirely. Ideas on this and other feedback are welcome.
@arnos I'm still reluctant to put a version label on it, since the API does not yet feel stabilized to me. I'm still exploring the GraphQL world.
That tells me that this isn't really production ready :/
@ijsnow Depends on your definition of "production ready": if you mean "use it and expect bug fixes without occasionally having to invest work to adopt some API changes", then no, it is not "production ready" in that sense. If you mean "it does not have bugs all over the place", then yes, it is.
I just tested the new version.
It seems a bit weird in some places compared to the old style: b.Resolvers("Query", (*root)(nil), map[string]interface{}{
But not having a func for every field that is just a getter is a lot better! :+1:
@metalmatze Thanks for giving it a try. Any ideas on how I could improve the new API? It also feels a bit weird to me, but maybe we're just not used to it.
@neelance I tried it out; some observations:
SB.Resolvers(
    "Queries", (*Resolver)(nil), map[string]interface{}{
        "flavor":  (*Resolver).Flavor,
        "flavors": (*Resolver).Flavors,
    },
)
SB.Resolvers(
    "Flavor", (*FlavorResolver)(nil), map[string]interface{}{
        "id":        "Id",
        "name":      "Name",
        "createdAt": (*FlavorResolver).CreatedAt,
        "updatedAt": (*FlavorResolver).UpdatedAt,
    },
)
In terms of changes: a dead-simple one is to extract the map and the resolver type instead of declaring them inline in the Resolvers call. This would make the API cleaner (IMHO); compare my query resolver example above with the following:
queryMap := map[string]interface{}{
    "flavor":  (*Resolver).Flavor,
    "flavors": (*Resolver).Flavors,
}
rootResolver := (*Resolver)(nil)

SB.Resolvers("Queries", rootResolver, queryMap)
SB.Resolvers("Flavor", flavorResolver, flavorMap)
Overall I like the change, though it brings some duplication for me with the clean architecture I use.
If you did make a version tag for the old API before you merge to master, it'd be really helpful for those of us using Glide and the like. We'd have better control of when we port our code to this new way of implementing resolvers.
Also, what about trying to get complete feature support (like subscriptions) before changing the API?
@neelance Do you think that the package is likely to go with this new way?
I browsed through the code a bit today. I see a lot of nice changes that happened behind the scenes where things seem more elegant.
But from the perspective of someone who would use the exterior API and not a maintainer, I don't see much benefit.
In fact, I would argue that the old way of doing things had less magic and was closer to the spirit of go, where we were relying on the type system explicitly (behind the scenes you still rely on this through reflection, but now the user only sees strings).
FWIW The new API does NOT seem easier to learn, it doesn't seem any harder either, but I can't see any objective argument by which the new way would be easier.
If you wanted to, I think you could probably keep a lot of the behind the scene changes while leaving the old type based API alone.
@neelance what did you set out to do with this refactor? I see you mention ResolverField, but I see no such class/function in the API. Do you mean the fact that for simple fields you can just use a string as a shortcut for a function that returns the type?
Is there a concrete example that you feel the new API simplifies? A code snippet would be great.
OK, with Go 1.9 type aliases I believe we have an even more interesting use case for the new resolvers structure. I have some time this week and at the beginning of September to refactor.
If I'm right, it will eliminate the bulk of the boilerplate code.
@neelance any word on the progress of this?
I completely agree with some of the earlier comments:
But from the perspective of someone who would use the exterior API and not a maintainer, I don't see much benefit... In fact, I would argue that the old way of doing things had less magic and was closer to the spirit of go...
The lack of "magic" of the current resolver system I think makes things clear to understand.
It would be neat if an approach could be taken similar to what grpc does (where they take a proto file and generate interfaces) where we could take a graphql file and generate interfaces that can make it even more go-like in that the provided resolver types must implement the given interfaces.
To me, using something like interface{} everywhere (like in the proposed new version) is a pretty strong anti-pattern, given the simplicity of code generation that is available now.
Just my 2 cents.
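To illustrate the gRPC-style generation idea above, the generated Go for one schema type could look something like this (entirely hypothetical output; no such generator ships with this library):

// Generated from the schema type:
//
//    type Human {
//        id: ID!
//        name: String!
//        height(unit: LengthUnit = METER): Float!
//    }
type HumanResolver interface {
    ID() string // graphql.ID in practice
    Name() string
    Height(args struct{ Unit string }) float64
}

// The application supplies the concrete type, and the compiler, not the
// runtime, verifies that every field of Human is implemented.
type humanResolver struct {
    id, name string
    height   float64
}

func (r *humanResolver) ID() string   { return r.id }
func (r *humanResolver) Name() string { return r.name }
func (r *humanResolver) Height(args struct{ Unit string }) float64 { return r.height }

var _ HumanResolver = (*humanResolver)(nil)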
Any update on this?
Hi everyone. I'm sorry that there currently is no progress on this library. This is because I am focused on bringing WebAssembly to the Go compiler, see https://github.com/golang/go/issues/18892#issuecomment-347057409.
Regarding the new way to declare resolvers: I liked the approach mentioned above better than the previous one, but I still wasn't fully happy with it. I did some more sketches and what I ended up with was this:
func resolveHuman(req *HumanRequest, h *store.Human) *HumanResponse {
    var resp HumanResponse
    resp.ID = h.ID
    resp.Name = h.Name
    resp.AppearsIn = h.AppearsIn
    resp.Height = make([]float64, len(req.Height))
    for i, f := range req.Height {
        resp.Height[i] = convertLength(h.Height, f.Args.Unit)
    }
    if req.Mass && h.Mass != 0 {
        f := float64(h.Mass)
        resp.Mass = &f
    }
    if req.Friends != nil {
        for _, id := range h.Friends {
            resp.Friends = append(resp.Friends, resolveCharacter(req.Friends, id))
        }
    }
    if req.Starships != nil {
        for _, id := range h.Starships {
            resp.Starships = append(resp.Starships, resolveStarship(req.Starships, store.Starships[id]))
        }
    }
    return &resp
}
The idea is that GraphQL handlers should be similar to http.Handler. The "resolvers" API that I borrowed from graphql-js now feels like too much magic and indirect control flow, and not really fitting for Go.
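To help picture how a selection set would drive a function like resolveHuman, here is one possible shape of the request/response types it assumes, reconstructed purely from the usage in the sketch above (none of this exists in any branch):

type HumanRequest struct {
    Mass      bool              // true if the mass field was selected
    Height    []HeightField     // one entry per occurrence of height in the query
    Friends   *CharacterRequest // nil if friends was not selected
    Starships *StarshipRequest  // nil if starships was not selected
}

type HeightField struct {
    Args struct{ Unit string } // the unit argument of the height field
}

type HumanResponse struct {
    ID        string
    Name      string
    AppearsIn []string
    Height    []float64
    Mass      *float64
    Friends   []*CharacterResponse
    Starships []*StarshipResponse
}

// Nested types would follow the same pattern.
type CharacterRequest struct{ /* ... */ }
type StarshipRequest struct{ /* ... */ }
type CharacterResponse struct{ /* ... */ }
type StarshipResponse struct{ /* ... */ }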
Right now this is all I can show to you, but I hope to continue working on this project at some point. Until then the project unfortunately is on hold, at least on my end. Can't clone myself. ;-)
That's a really interesting approach. I can't wrap my head around how the selection set would be efficiently executed... what am I missing?
The idea is that GraphQL handlers should be similar to http.Handler
@neelance, is there a branch that has these changes? Would you be willing to share your progress in a branch? Thank you
@neelance I like the library a lot. TBH I'm not sure I've thought these through for feasibility/benefit yet, but here are some ideas:
I'm thinking there could be a high-level NewSchema(q, m interface{}) which accepts pointer entry points for the query and mutation roots. From there, reflection can be used to inspect the parameters and generate the schema; a rough sketch follows below. If people want to see the IDL schema, a GenIDL() could render it.
There are two potential downsides I see to this:

- It might be difficult to provide some of the metadata, specifically descriptions for queries and modifiers.
- I think there's some benefit in that the IDL encourages you to "design" before you code. By generating it you lose that.
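Here is the rough sketch referred to above (everything in it is invented for illustration; none of it is an API of this library):

package schemagen

import "reflect"

type Schema struct {
    types map[string]reflect.Type
}

// NewSchema inspects the query and mutation root objects with reflection and
// derives the GraphQL schema from their exported methods and argument structs.
func NewSchema(q, m interface{}) (*Schema, error) {
    s := &Schema{types: map[string]reflect.Type{}}
    for _, root := range []interface{}{q, m} {
        if root == nil {
            continue
        }
        t := reflect.TypeOf(root)
        if t.Kind() == reflect.Ptr {
            t = t.Elem() // accept pointer entry points, e.g. &QueryRoot{}
        }
        s.types[t.Name()] = t
        // A real implementation would now walk t's exported methods to derive
        // field names, argument structs and result types.
    }
    return s, nil
}

// GenIDL renders the derived schema back out as IDL text for inspection.
func (s *Schema) GenIDL() string { /* omitted in this sketch */ return "" }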
One benefit that I see with the current function per resolver field is that you can easily introduce a fine-grained authorisation layer. So if "role X" isn't allowed to see "field Y" you can do that with an injected function/struct.
Something like this that provides context;
type Resolver interface {
    Resolve(sc SchemaContext)
}
Where I reference SchemaContext in the above, I mean something more like Apache Beam's ProcessContext rather than the typical Go Context.
I like the Handler interface! 👍
I've been working on a new GraphQL server package and a large part of the reason why is that I'm unhappy with the interface of this package. I'm now re-thinking this and wondering if I can merge some of my work into this package, as it seems like it's emerging as the GraphQL package of choice.
My thinking about resolvers mostly rests on these principles:
Currently I'm working with this interface:
// Resolver represents a GraphQL object type. It allows the query executor to resolve fields.
type Resolver interface {
    // Resolve returns a value for a field, according to the given field name and args.
    // If the field is itself a value with its own fields, another Resolver must be returned.
    // If the field is a list, a ListIterator should be returned.
    // Resolve is assumed to be a blocking function by default, but it may return a chan interface{}
    // if the field is being resolved asynchronously. The field value must be sent on the channel and
    // is treated the same way as if it had been returned directly from Resolve.
    // If the field name is not recognized by the Resolver, then Resolve's behavior is unspecified
    // and it may panic.
    Resolve(ctx context.Context, field string, args map[string]interface{}) (interface{}, error)

    // TypeName returns the name of the GraphQL type that the Resolver is resolving fields for.
    TypeName() string
}
I'm still not totally happy with this. For one thing, this Resolver differs from http.Handler in that Resolver objects tend to be constructed dynamically, during the GraphQL query, rather than ahead of time. (http.Handler objects can be constructed during requests too, but typically they're not.)
The reason is that there's no place to pass things like data loaders or other state. It wouldn't be hard to fix this by adding another interface{} parameter or similar, but that doesn't feel very elegant or type safe.
This interface does enable some pretty nice things. For example, I've got a helper function that can create a Resolver from a struct that has graphql tags on its struct fields:
// NewStructResolver returns a Resolver that resolves fields from a struct's fields.
// s must be a struct or a pointer to a struct.
// The returned Resolver resolves only fields that have the graphql tag. For example:
//
//    type Object struct {
//        StringField string `graphql:"string_field"`
//    }
func NewStructResolver(typeName string, s interface{}) (Resolver, error)
So now servers have the option of doing reflection magic to implement their resolvers, but aren't forced to. And the resolvers can compose nicely, so you can use a struct-based Resolver for most of your object fields, but have custom logic for the remaining ones.
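Under the interface and helper sketched above, that composition could look roughly like this (userRow, userResolver and computeAvatarURL are invented for the example; context is imported as usual):

type userRow struct {
    Name  string `graphql:"name"`
    Email string `graphql:"email"`
}

// userResolver answers one field itself and delegates everything else to the
// tag-driven struct resolver built with NewStructResolver.
type userResolver struct {
    row    *userRow
    fields Resolver // e.g. from NewStructResolver("User", row)
}

func (r *userResolver) TypeName() string { return "User" }

func (r *userResolver) Resolve(ctx context.Context, field string, args map[string]interface{}) (interface{}, error) {
    if field == "avatarURL" {
        return computeAvatarURL(r.row.Email), nil // custom logic for one field
    }
    return r.fields.Resolve(ctx, field, args) // everything else comes from the tags
}

// computeAvatarURL is a stand-in for whatever derivation the custom field needs.
func computeAvatarURL(email string) string {
    return "https://example.com/avatar/" + email
}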
I'm really interested in trying to engage with the community to solve this problem and working on this. Let me know what you think.
FWIW, my workplace has used the https://github.com/Applifier/graphql-codegen to stamp out all our resolvers for use with this library.
It works, but we're probably about to fork it to make some enhancements to the way it does types, and here's why: right now, that codegen emits interface{} wildcard type fields in the Go structures it generates whenever there's a Union declaration in your graphql schema. And that's kind of unfortunate.
It looks to me like this graphql-go server library can handle interfaces just fine there, so we should be able to make an interface (just with a marker method; in practice our code will still do type casing, because there's not much room for real semantic interfaces here) to match each Union in our graphql schema. This would make a lot of our nearby code quality higher; interface{} is the least useful thing for getting the compiler to give us any helpful checks at all.
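Concretely, the marker-method interface described above might be generated like this for a hypothetical union SearchResult = Human | Starship (example names only):

// SearchResult corresponds to the GraphQL union; IsSearchResult is only a
// marker, so the compiler restricts which concrete types can appear.
type SearchResult interface {
    IsSearchResult()
}

type Human struct{ Name string }
type Starship struct{ Name string }

func (Human) IsSearchResult()    {}
func (Starship) IsSearchResult() {}

// Callers still type-switch in practice, but now only over members of the
// union rather than over a bare interface{}.
func describe(r SearchResult) string {
    switch v := r.(type) {
    case Human:
        return "human: " + v.Name
    case Starship:
        return "starship: " + v.Name
    default:
        return "unknown"
    }
}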
This is orthogonal to all the other concerns about e.g. whether or not every resolver deserves a goroutine, etc (and for a datapoint, yes, my group too is almost entirely full of codegen'd resolvers which are just dereferencing struct fields).
Hopefully it's not too late to contribute.
This discussion reminds me of Cheng Lou's talk on the Spectrum of Abstraction (start ~17:28). It comes down to how well-defined you think resolvers are for users of your library. The more well-defined they are, the more you move down the ladder of abstraction, say from functions to structs, and get some perf wins. In the link, Cheng Lou gives his understanding of a few libraries, the decisions they made, and how those decisions affect their usage.
Personally, I don't think resolvers are well-defined yet, but this may be because I'm a mediocre programmer without the experience to reliably predict how I will use GraphQL resolvers. Right now, I'm using them in two ways:
func (r *FooResolver) Bar() (string, error) { return r.v.ViewFoo(r.ctx, r.db) }
I think the first will reliably be a best practice, but I'm quite unsure. I know I'll be prototyping using the second way for a while, so a certain level of expressiveness seems necessary. I'm not sure a few perf wins and keystrokes/codegen saved are worth adding to the API size.
I'm specifically looking for the ability to define resolvers at runtime (without using reflection and compiled-in types). I've started working on porting this library to support that use case. However, if there is a branch actively being developed, I'd be happy to contribute there to make this a reality.
Can anyone point me in the right direction for this?
There's an alternative project out there which has a very different approach and generates type Resolver interface {...} in somewhat the way described: https://github.com/vektah/gqlgen/
It more or less forked the GQL parsing logic from this library, but then started over on the resolvers. The way it ended up, you can see all the methods you need to implement on the generated Resolvers interface with this approach (and the compiler can too... which is... VERY nice).
I'm not really one to say anything is ever "perfect", but more interfaces -> more compile time checks -> more happy.
(I know I also said something about https://github.com/Applifier/graphql-codegen earlier in this same thread. Yeah, I abandoned that. My whole team went over to this vektah library and we're much happier there.)
@eric-am I've been using it as well. I've taken the schema and query parsing logic and basically made a custom "ExecutableSchema" that allows me to register / reregister new schemas (and their resolvers) at runtime
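One way such runtime (re)registration can be made safe for concurrent requests is to keep the executable schema behind an atomic value and swap it wholesale; a minimal sketch, assuming a placeholder Schema type with an Exec method (not this library's API):

package schemaswap

import (
    "net/http"
    "sync/atomic"
)

// Schema is a stand-in for whatever executable schema type is being rebuilt.
type Schema struct{}

func (s *Schema) Exec(w http.ResponseWriter, r *http.Request) { /* execute the query */ }

type schemaHolder struct {
    v atomic.Value // holds *Schema
}

// Swap atomically replaces the schema; in-flight requests keep using the old one.
func (h *schemaHolder) Swap(s *Schema) { h.v.Store(s) }

func (h *schemaHolder) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    s, _ := h.v.Load().(*Schema)
    if s == nil {
        http.Error(w, "no schema registered", http.StatusServiceUnavailable)
        return
    }
    s.Exec(w, r)
}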
I'm not sure what the final approach with this feature will be, but just thought I'd throw my opinion in.
I don't mind the current approach to defining resolvers, but, like this issue, I do have a problem with basic accessor methods running as resolvers. I also have a concern with just passing through a struct to fetch the fields from. For example, if I want to implement a permissions model which determines whether a client can read a particular field, having a method here is key. So I propose that we have "opt in" accessor methods.
For example:
The following would be an accessor:
func (u *UserResolver) FirstName() *string {
    return &u.User.FirstName
}
While the following operates as a resolver:
func (u *UserResolver) FirstName() (*string, error) {
    return &u.User.FirstName, nil
}
Note returning an error.
This allows me to do something like:
func (u *UserResolver) FirstName() *string {
    if u.currentUser.hasPermissions("read_first_name") {
        return &u.User.FirstName
    }
    return nil
}
Without having to worry about the number of resolvers running in parallel.
I'm also open to different method signatures, but I'd prefer not to do away with accessor methods in favour of just using struct fields. Not sure if this is entirely possible since I haven't dug too deep into the library.
Hey, I just came upon this issue. I have mobile apps deployed in production. We had to change the name of a field for an update, and now existing users are going to crash because of this change: they request account_id, but the field has been renamed to account_sso_id.
"account_sso_id": &graphql.Field{ Type: graphql.String, },
I've just been to the GraphQL Europe conference and it got me thinking about how we specify resolvers. Using Go methods seemed nice at first, but is that really the best way? Please take a look at this alternative: https://github.com/neelance/graphql-go/blob/new-resolvers/example/starwars/starwars.go#L291
Advantages:

- API easier to learn
- clean shortcuts like ResolverField
- most type checking at startup is still possible
- explicit name mapping between GraphQL and Go
- automatic handling of GraphQL type assertions

Disadvantages:

- some loss of compile-time type safety (resolvers are registered via interface{})

What do you think?
/cc @F21 @nicksrandall @acornejo @bsr203 @EmperorEarth