alexgenco opened 5 years ago
Question on `$commentIds`: if the user was null, would `$commentIds` be null or the empty list? (Trying to figure out if it's a non-nullable list or not.)
Question on variable equivalence: can exported values be used in directives? (I suggest "no".)
The query version seems more complex (because of parallelism) and less necessary (because of graph traversal); do you need that or would a mutation-only version solve your needs?
> if the user was null, would $commentIds be null or the empty list?
This is a good question... Does anyone know what happens in Sangria in this scenario? (first example here).
Maybe we could add another parameter to the directive to define how to handle that case, e.g. `@export(into: "commentIds", default: NULL)` or `@export(into: "commentIds", default: EMPTY_LIST)`. Or if we didn't want to clutter the enum namespace with `NULL` and `EMPTY_LIST`, maybe we could do `@export(into: "commentIds", null: true)` to indicate it starts as `null`. Open to suggestions here.
> can exported values be used in directives
I can't really think of a use case for that, but since we're just dealing with scalar values I don't really see why not? Also I assume we're talking about query directives as opposed to schema directives. The latter definitely wouldn't work.
> The query version seems more complex (because of parallelism) and less necessary (because of graph traversal); do you need that or would a mutation-only version solve your needs?
As you can tell by my extremely contrived query example, we don't actually have any use cases for queries at the moment. But while a mutation-only version would probably solve all my company's current needs, I think the general principle is equally applicable to queries.
> since we're just dealing with scalar values I don't really see why not?
I'm thinking that the `@skip` directive, for instance, would result in a different response shape being sent depending on the exported value, which the client cannot know at request time. This seems like it could open a number of issues for client libraries, particularly strongly typed ones. (And yes, query directives :) )
> while a mutation-only version would probably solve all my company's current needs, I think the general principle is equally applicable to queries
Since there's no use case for it right now, and it complicates the proposal, I suggest you remove it and make it mutation only for now. We can always expand it to queries later should there be sufficient need for it.
Parallel execution strategies will probably have to establish execution ordering between fields, such that a field using an exported variable from another field would wait for the other field to finish before resolving.
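To make this concrete, here is a minimal sketch (not from the proposal itself; all names are hypothetical) of how a parallel executor could establish that ordering: fields are grouped into "waves" so that a field consuming an exported variable only runs after the field that produces it has resolved.

```javascript
// Sketch: plan execution "waves" so that a field using an exported
// variable waits for the field that exports it. Each field declares
// which variables it exports and which it uses.
function planWaves(fields) {
  const remaining = [...fields];
  const available = new Set(); // variables exported so far
  const waves = [];
  while (remaining.length > 0) {
    // A field is ready once every variable it uses has been exported.
    const ready = remaining.filter(f => f.uses.every(v => available.has(v)));
    if (ready.length === 0) {
      throw new Error("Cyclic or unsatisfiable @export dependency");
    }
    waves.push(ready.map(f => f.name));
    for (const f of ready) {
      f.exports.forEach(v => available.add(v));
      remaining.splice(remaining.indexOf(f), 1);
    }
  }
  return waves;
}

const waves = planWaves([
  { name: "user", exports: ["userId"], uses: [] },
  { name: "comments", exports: [], uses: ["userId"] },
  { name: "viewer", exports: [], uses: [] },
]);
// "user" and "viewer" can run in parallel; "comments" must wait for "user".
console.log(JSON.stringify(waves)); // [["user","viewer"],["comments"]]
```

Fields within a wave can still resolve in parallel; the ordering constraint only applies across waves, which is exactly the extra bookkeeping a parallel execution strategy would have to take on.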
In mutations, only the top-level selection set is executed in sequence, so even if we restrict this feature to mutations it is still possible to break execution order by using an exported variable as an argument to a field inside the mutation response:
```graphql
mutation {
  foo {
    id @export(as: "id")
    bar(id: $id)
  }
}
```
So I strongly oppose this proposal since it would significantly complicate the execution algorithm and affect performance even for clients that wouldn't use this feature.
Moreover, I don't understand why you can't use a combination of query batching and `@export`:
```javascript
[
  {
    query: `
      mutation ($input: TokenizeCreditCardInput!) {
        tokenizeCreditCard(input: $input) {
          paymentMethod {
            id @export(as: "id")
          }
        }
      }
    `,
    variables: { input: "..." }
  },
  {
    query: `
      mutation ($id: ID, $transaction: TransactionInput!) {
        chargePaymentMethod(input: { id: $id, transaction: $transaction }) {
          transaction {
            id
            status
          }
        }
      }
    `,
    variables: { transaction: "..." }
  },
]
```
It's fully spec compliant and, moreover, doesn't require any changes to core GraphQL libraries like sangria, graphql-js, etc.
I also think this solution perfectly fits the "Simplicity and consistency over expressiveness and terseness" principle:
https://github.com/graphql/graphql-spec/blob/master/CONTRIBUTING.md#guiding-principles
> There are plenty of behaviors and patterns found in other languages intentionally absent from GraphQL. "Possible but awkward" is often favored over more complex alternatives. Simplicity (e.g. fewer concepts) is more important than expressing more sophisticated ideas or writing less.
@alexgenco @benjie What do you think?
> This seems like it could open a number of issues for client libraries, particularly strongly typed ones.
What kinds of issues are you thinking of? Passing exported variables into `@skip` still seems fairly intuitive to me... But I've also never used it client-side, so I'm not clear on its implications for strongly typed clients.
I think my goal here is to keep this general and applicable to any scalar value no matter where it is. IMO excluding it from certain places like mutations and directives complicates the concept of the directive, and places too much of a burden on documentation to explain why it only applies in certain places. GraphQL implementations can always roll it out piece by piece if they choose.
Which brings me to @IvanGoncharov's very good point: making it mutation-only doesn't actually avoid the async field dependency issues. I don't think there's any way around that.
> it would significantly complicate the execution algorithm and affect performance even for clients that wouldn't use this feature.
How would this affect the performance of clients that don't use the feature?
> Moreover, I don't understand why you can't use a combination of query batching and `@export`:
graphql-java doesn't support this (though not for lack of trying: https://github.com/graphql-java/graphql-java/pull/810). Also, neither `@export` nor multiple operations are in the spec yet. One of the purposes of this RFC is to decouple the two.
> How would this affect the performance of clients that don't use the feature?
@alexgenco Before running any resolver, we need to check all of its arguments and directives to see whether every variable they reference is already available. And we can't detect whether the query has dynamic variables at all until we fully iterate over the entire query.
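As a rough illustration of that cost (a sketch with an invented toy AST, not any real library's representation): because an `@export` may appear arbitrarily deep in the document, the server has to walk every selection up front just to learn whether any dynamic variables exist, a pre-pass paid even by queries that never use the feature.

```javascript
// Sketch: recursively walk a toy selection-set AST collecting every
// variable name exported via an @export directive. A server would have
// to run a pass like this over the whole document before execution.
function collectExports(selections, found = new Set()) {
  for (const field of selections) {
    for (const dir of field.directives ?? []) {
      if (dir.name === "export") found.add(dir.args.as);
    }
    if (field.selections) collectExports(field.selections, found);
  }
  return found;
}

const doc = [
  { name: "foo", selections: [
    { name: "id", directives: [{ name: "export", args: { as: "id" } }] },
    { name: "bar" },
  ]},
];
console.log([...collectExports(doc)]); // [ 'id' ]
```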
> Also, neither @export nor multiple operations are in the spec yet.
The GraphQL specification deliberately doesn't specify how a request is transported; it only specifies the ExecuteRequest algorithm:
https://graphql.github.io/graphql-spec/June2018/#sec-Executing-Requests
So nothing prevents you from sending an array of GraphQL requests in a single HTTP request.
Moreover, see how it was done for graphql-java-servlet, which is built on top of graphql-java: https://github.com/graphql-java-kickstart/graphql-java-servlet/pull/48
@benjie also implemented Apollo-style query batching here: https://github.com/graphile/postgraphile/issues/634
And from what I've seen, all production-ready GraphQL server implementations already support Apollo-style query batching.
You can implement `@export` in the same manner, for example by using the graphql-java instrumentation mechanism: https://www.graphql-java.com/documentation/v12/instrumentation/
BTW, it's completely legal to add your own query directives: https://graphql.github.io/graphql-spec/June2018/#sec-Type-System.Directives
> Directives can also be used to annotate the type system definition language as well, which can be a useful tool for supplying additional metadata in order to generate GraphQL execution services, produce client generated runtime code, or many other useful extensions of the GraphQL semantics.
In the future, if a custom implementation of the `@export` directive becomes popular in the GraphQL ecosystem, we can think about adding it to the spec as one of the standard directives.
Thanks @IvanGoncharov, this is very helpful. We will try this approach on our end and see what comes out of it.
@alexgenco Thanks for taking the time to evaluate the alternative proposal; it would be great to get feedback from you.
In addition to the problems stated above, the impossibility of validating this before execution would prevent me from ever implementing this in a server.
```graphql
mutation mut($varName: String!) {
  makeThing {
    id @export(as: $varName)
  }
  doThingWithThing(thingId: $thigId) { # uh... mistake or intentional?
    result
  }
}
```
When variables don't need explicit definitions or types, too much validation would need to be skipped for my taste. For the above example, we would need to skip at least 2 of the current validation rules.
I really think a multiple-operation-based solution is the simplest and best way to address the problem here, but for the sake of discussion, consider all of the special rules that this export directive would require of the validator and executor: "'as' argument must be a string literal", "'as' argument must refer to a variable name used elsewhere", "'as' argument cannot be an existing variable name", "export directive cannot be used on object types", etc. At this point it feels like we're shoehorning a major feature somewhere it doesn't really fit, in a backwards-incompatible way, and a language feature starts to make more sense to me:
```graphql
mutation {
  makeThing {
    id => $thingId
  }
  doThingWithThing(thingId: $thingId) {
    result
  }
}
```
Of course, that doesn't help with execution order, but the impact on the spec and on server implementations would be much more elegant imo.
Thanks!
I just wrote a long comment in the original ticket before finding this one. I'm touching on some of the same points discussed here; I hope it adds to the discussion: https://github.com/graphql/graphql-spec/issues/377#issuecomment-575814971
Hey sorry for the radio silence here. We finally got around to the suggestion from @IvanGoncharov and have posted the result on our docs: https://graphql.braintreepayments.com/guides/sequence_requests/. Would be interested to hear y'alls thoughts.
Edit: updated docs are now at https://graphql.braintreepayments.com/guides/request_batching/
This looks very similar to Hot Chocolate's query batching: https://hotchocolate.io/docs/batching
We're discussing this under the GraphQL over HTTP Working Group - your contributions there would be welcome, @alexgenco
Thanks for that, @benjie. I missed that part of the Hot Chocolate docs and assumed they had implemented batching only with the multiple-operation approach, which we have been trying to avoid. We're probably going to make a few tweaks to bring ours closer to parity.
Definitely factor in the discussions (in particular I think the operations should declare all variables they use, even the ones that were "exported/imported", and I think there should be a way of dealing with type mismatches (exporting a String but importing a Float), and there's complexity over how to handle values that come from within object lists, and ... probably best to take the discussion to the other thread).
Will do. But we are generally taking a minimalist approach to those types of edge cases, and leaning as heavily on the default type checking as possible.
I created an `@export` directive for my GraphQL server, documented here. In this implementation, dynamic variables are still variables, i.e. they must be defined in the query arguments or it throws an error.
To accomplish this, I've had to use a hack: for the GraphQL server to know whether a variable is dynamic or static when parsing the query, the variable must indicate this somehow through its name. So I chose to name all dynamic variables starting with `_`, as in `$_dynamicVariable` versus `$staticVariable`. This way, if the name of the variable starts with `_`, the GraphQL engine doesn't resolve it when parsing the query, since its value will be added at runtime; otherwise, it is a static (i.e. "normal") variable and is handled as it is currently.
If anyone can think of a better way to accomplish this, without a hack of naming static and dynamic variables differently, I'd love to hear about it.
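The convention above can be sketched in a few lines (a hedged illustration of the naming rule only, not leoloso's actual implementation):

```javascript
// Sketch: split a query's variable names into "dynamic" (filled in at
// runtime by @export, names starting with "_") and "static" (resolved
// normally from the request's variables payload).
function partitionVariables(variableNames) {
  const dynamic = variableNames.filter(v => v.startsWith("_"));
  const staticVars = variableNames.filter(v => !v.startsWith("_"));
  return { dynamic, staticVars };
}

const { dynamic, staticVars } =
  partitionVariables(["_dynamicVariable", "staticVariable"]);
console.log(dynamic);    // [ '_dynamicVariable' ]
console.log(staticVars); // [ 'staticVariable' ]
```

The obvious downside, as noted, is that the distinction lives in a naming convention rather than in the language itself, so nothing stops a client from accidentally naming a static variable with a leading underscore.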
@leoloso after almost a year of using this approach, how do you feel about your design choices?
They work alright, so I'm happy enough with this solution. However, I do wish I didn't have to prepend dynamic variables with `_`, because that's clearly a hack, but I see no way around it.
RFC: Dynamic variable declaration
This RFC is an attempt to revive https://github.com/graphql/graphql-spec/issues/377 by decoupling it from multiple operation support, and to flesh out our specific use cases and needs. It uses the same directive name `@export` in examples; however, we could also choose a different name if we want to keep them independent features. See "Limitations/considerations" for alternatives.

Problem
At Braintree, we've been designing our new GraphQL API with the goal of breaking up our legacy API features into small, focused, and composable queries and mutations. For instance, our legacy API has a `Transaction.sale` operation that accepts either a `payment_method_token`, a `payment_method_nonce`, or raw payment method details. In addition, you can pass `store_in_vault` to indicate that you would like to persist the payment method as well. This proliferation of options leads to ambiguity about things like parameter precedence, and makes it difficult for clients to understand the overall surface area of the endpoint.

In order to avoid this scenario in our GraphQL API, we have designed our schema to only expose the basic building blocks of these operations, allowing clients to compose them in whatever way fits their needs. Referring back to the above example, if a client wants to create a transaction from raw credit card details, they would use two GraphQL mutations. First, to tokenize the details:
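A sketch of the tokenize step, mirroring the `tokenizeCreditCard` mutation shown in the batching example earlier in this thread (exact field names assumed):

```graphql
mutation ($input: TokenizeCreditCardInput!) {
  tokenizeCreditCard(input: $input) {
    paymentMethod {
      id
    }
  }
}
```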
Then to charge the resulting single-use payment method:
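A sketch of the charge step, mirroring the `chargePaymentMethod` mutation from the batching example earlier in this thread (the `paymentMethodId` input name is assumed):

```graphql
mutation ($paymentMethodId: ID!, $transaction: TransactionInput!) {
  chargePaymentMethod(input: { paymentMethodId: $paymentMethodId, transaction: $transaction }) {
    transaction {
      id
      status
    }
  }
}
```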
These two mutations are easy to understand, with little ambiguity as to what you are supposed to pass in and what you expect to get in response. However, it requires clients to make two separate requests in order to do something that was previously achievable in one. That means double the amount of time spent waiting on HTTP, and extra logic around error handling, partial failure states, etc. It also undermines one of the main benefits of GraphQL: improving the performance of your clients by combining multiple requests into one.
The goal of the `@export` directive is to allow clients to take the `paymentMethod.id` from the first query and pass it through to the `paymentMethodId` input on the second, all in a single request.

Proposal
Happy path
Using the `@export` directive, we expect the above example to turn into something like this:
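A sketch of the combined single-request version (argument and input names assumed, following the tokenize/charge mutations discussed above):

```graphql
mutation ($input: TokenizeCreditCardInput!, $transaction: TransactionInput!) {
  tokenizeCreditCard(input: $input) {
    paymentMethod {
      id @export(as: "paymentMethodId")
    }
  }
  chargePaymentMethod(input: { paymentMethodId: $paymentMethodId, transaction: $transaction }) {
    transaction {
      id
      status
    }
  }
}
```

Note that `$paymentMethodId` appears as an argument without an explicit variable definition; whether dynamic variables should require declaration is one of the open questions in this thread.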
The `as` argument takes a `String` and creates a reference to a variable that would be accessible as an argument to other fields.

This covers the case where we want to declare a single variable, but it doesn't really make sense when used inside a list type. For that, we would need a way to append results into a list variable. That could be achieved with another argument like `into`:
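A sketch of how `into` might accumulate values into a list variable, reusing the `$commentIds` example from the top of this thread (field names assumed):

```graphql
query {
  user(id: 1) {
    comments {
      id @export(into: "commentIds")
    }
  }
  commentsById(ids: $commentIds) {
    body
  }
}
```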
Errors
A failure to resolve an exported field should result in an error on any field that attempts to use it as input. For instance, if the above `tokenizeCreditCard` request failed because of an invalid credit card number, the response could be something like:

The error would be similar to what you would get for referencing an undeclared variable as an input parameter, except with messaging that communicates the variable was expected to be exported.
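A hedged sketch of what such an error payload might look like, modeled on the standard GraphQL errors response shape (messages and paths assumed):

```json
{
  "data": {
    "tokenizeCreditCard": null,
    "chargePaymentMethod": null
  },
  "errors": [
    {
      "message": "Invalid credit card number.",
      "path": ["tokenizeCreditCard"]
    },
    {
      "message": "Variable \"$paymentMethodId\" was expected to be exported by a previous field, but was never resolved.",
      "path": ["chargePaymentMethod"]
    }
  ]
}
```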
Limitations/considerations
Here are some of the ones I've come up with so far:
- Variable types are inferred from the `@export` declaration (`as` implies the exact type of the field, `into` implies the exact type wrapped in a list). In my mind we shouldn't need to declare the variable types explicitly, since this inference seems unambiguous. But if I'm wrong about that, or if it somehow eases the directive's implementation, I think we could work out a way of declaring the types as well. Something like `@export(as: "userId", type: ID!)`, or similar.
- An alternative directive name, e.g. `@declare`: `@declare(as: "fooId")`, `@declare(as: "fooIds", accumulate: true)`.