Closed Poincare closed 6 years ago
I suspect it would be easier if my app was using the current version of react-apollo which makes it easier to transform data between the GraphQL query and the React props. But of course, it would be even easier if you could just say "any Date is a Date"...
I think having a "standard" set of custom scalars, like Date
, would be a great way to solve this.
Why not provide a way to handle those at the network layer level? I'll try something in this area soon with file uploads, and I'll let you know what I find!
We could have some sort of type manager that is in charge of de-serializing scalars on the client, with some handy defaults like Date, while still being extendable.
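Sketched out, such a type manager might look like the following (the ScalarTypeManager name and API are purely hypothetical, not part of apollo-client):

```javascript
// Hypothetical sketch of a client-side scalar "type manager" with a Date
// default, extendable with user-defined scalars. Not a real Apollo API.
class ScalarTypeManager {
  constructor() {
    // Handy default: de-serialize ISO strings into Date objects.
    this.parsers = { Date: (value) => new Date(value) };
  }

  // Register a parser for an additional custom scalar.
  register(scalarName, parseFn) {
    this.parsers[scalarName] = parseFn;
  }

  // De-serialize a raw wire value for the given scalar type;
  // unknown scalars pass through unchanged.
  parse(scalarName, value) {
    const parser = this.parsers[scalarName];
    return parser ? parser(value) : value;
  }
}

const manager = new ScalarTypeManager();
manager.register('Position', (s) => {
  const [x, y] = s.split(';').map(Number);
  return { x, y };
});
```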
Hi folks,
I'm working on a patch which adds a CustomScalarManager
class. I have the following question: how can the CustomScalarManager
class be 'plugged' into the existing code? Following @rricard's comment, I was thinking of implementing a CustomScalarManager.getAfterWare()
method and letting the user call networkInterface.useAfter(customScalarManagerAfterWare)
. The afterware would de-serialize scalars. This is not implemented yet in my patch. Am I going in the right direction, or am I mistaken somewhere? I'm trying to gather a few comments in order to make a great patch. Then I'll open a PR and ask for reviews.
If there are other features that look interesting to you for that CustomScalarManager
, just ask and I'll try to include them in my patch.
Regards, Olivier
Hi @oricordeau, I've just come across this issue, and what you're suggesting seems to be what I need given that I've configured Apollo GraphQL Server to support a Date
scalar. Did you make any progress here, or have you discovered some other way of doing this?
I'm interested in this too. For now, I've hacked my way around this limitation by manually converting any field that's supposed to be a date into an actual Date
object in my query containers, but it'd be a lot cleaner if dates could be stored as dates directly in the store.
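That manual workaround looks roughly like this (the user shape and the joinedAt field are made-up examples):

```javascript
// Hypothetical workaround: convert known date fields by hand after the
// query result arrives, before handing the data to the component.
function convertDates(user) {
  return user && {
    ...user,
    // 'joinedAt' is an example field; every dated field in every query
    // needs the same treatment, which is what makes this tedious.
    joinedAt: user.joinedAt && new Date(user.joinedAt),
  };
}
```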
@stubailo I'm curious to know how you solved this in Optics when you have to fetch data with dates?
Not sure if this has been mentioned already, but if we want to support custom scalars we would need the GraphQL schema to know which fields correspond to which scalar types. How about adding support like this?
gql`
fragment on User {
joinedAt @transform(${joinedAt => new Date(joinedAt)})
}
`
Now whatever code is using that fragment has access to a transformed scalar and we don’t need to fetch the schema. I think this function-in-directive pattern is a super powerful tool to start looking into. For example we could also use it to allow users to define resolvers for client fields:
gql`
fragment on User {
localSettings @client(${() => { ... }})
}
`
@stubailo I was wondering if the team has any design ideas you could share for tackling this?
@Akryum I'll ask @daniman from the service team what they did and report back to you.
@Akryum so according to Danielle we don't currently do anything special, we just turn ISO date strings into dates inside the component.
I think that's a fine approach, but if you want something fancier you could probably also do it in props
of the graphql HOC.
What do you mean by 'HOC'?
Higher Order Component (i.e. graphql
from react-apollo)
@stubailo Is anything planned to have a nice included solution to automatically cast strings into Dates?
I was thinking, maybe we could do something like this:
const query = gql`
query {
allMessages {
text
created: Date
}
}
`
const scalarResolvers = {
Date: value => new Date(value),
}
That would allow us to pretty easily parse any custom scalar on the client. I think it's a great idea.
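Applying such a resolver map could be sketched like this (the field-to-scalar mapping is written by hand here, since nothing extracts it from the annotated query yet):

```javascript
// Hypothetical: apply per-scalar parsers to known fields of a result.
const scalarResolvers = {
  Date: (value) => new Date(value),
};

// Hand-written mapping of field name -> scalar type for one query.
const fieldScalars = { created: 'Date' };

function parseScalars(obj) {
  const out = { ...obj };
  for (const [field, scalarName] of Object.entries(fieldScalars)) {
    if (field in out) out[field] = scalarResolvers[scalarName](out[field]);
  }
  return out;
}

const message = parseScalars({ text: 'hi', created: '2018-05-01T12:00:00Z' });
```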
Any plans on implementing this, or is there any way to do this already with some middleware or similar?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions to Apollo Client!
Keeping this open.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions to Apollo Client!
Keeping this open.
@clayne11 hmm, adding the feature label should have kept this open! I'll take a look so you don't have to comment 👍
I too would like to have more control over how Apollo deserializes the response from client. It isn't enough to munge the shape of the data in the HOC's props
method. If the date is nested in the response data, then we would have to iterate over the entire response every time. Not just every time it is fetched, either, since as far as I know the props
method is called even if the data was cached.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions to Apollo Client!
I'm gonna keep this open :)
It would be awesome for the ReasonML community if we could give a query and a custom parser to the request, as that would give us type-safety all the way through.
Now that we have a way to share parts of the schema, maybe there is an option to share the subsets that contain scalar types, so we can get the additional information needed client-side to support them.
I too need custom scalars in my apps, and I'm a bit surprised to see such low activity on this. From what I understand, custom scalars are a core part of GraphQL, and Relay already supports them. Given this, I would expect this to have a bit higher priority. Is there anything in particular that is holding up the implementation of custom scalars?
Related to #2626
Is there any update on this? The Apollo client 2.0 release announcement mentions support for this but it does not seem like it is actually supported yet.
With ApolloClient 2.0, we could build an apollo-link
directive that could be attached to fields, something like:
query myQuery {
stringOrDateInt @normalize(type:"Date")
}
Then you could write custom scalar parsers that you would hook into the normalizer link?
That defeats the whole point of having a schema with custom scalars. You're duplicating work within every single query. Super error prone and a lot of unnecessary work.
I've been looking into making a link that handles custom scalars. I agree with @clayne11 that using the schema is the best way. The problem is that the schema is not available on the client. However, the client does not need the full schema in order to handle custom scalars; it just needs to know which fields are custom scalars. So this could be solved by either providing this information manually to the link, or using codegen to generate it from the schema.
From thinking about this I've found two ways the missing information could be provided to the client. One way would be to have information per type like this:
{
  "Customer": {
    "created": "DateScalar",
    "fooField": "FooScalar"
  },
  "Order": {
    "orderDate": "DateScalar",
    "barField": "BarScalar"
  }
}
So when the link gets the result from the server it can check the __typename
field of each object, and transform the scalar fields. The downside of this is that it requires the __typename
field to exist.
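A sketch of that __typename-driven walk (the type map and resolver names are illustrative, not a real API):

```javascript
// Hypothetical: recursively transform scalar fields of each object in a
// response, keyed on its __typename, using an extract of the schema.
const typeMap = {
  Customer: { created: 'DateScalar' },
  Order: { orderDate: 'DateScalar' },
};
const resolvers = { DateScalar: (v) => new Date(v) };

function transformResult(node) {
  if (Array.isArray(node)) return node.map(transformResult);
  if (node === null || typeof node !== 'object') return node;
  const scalarFields = typeMap[node.__typename] || {};
  const out = {};
  for (const [key, value] of Object.entries(node)) {
    out[key] =
      key in scalarFields ? resolvers[scalarFields[key]](value) : transformResult(value);
  }
  return out;
}
```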
The other way would be to provide information about the paths that can contain custom scalars.
I've made some experiments with a link that transforms the results and it is quite straightforward. But transforming does need to happen with the query AST that is sent to the server too and that is a bit more complex.
The full schema can definitely be available on the client and in fact it has to be if you use interface
types in order to ensure that Apollo can determine what type of objects are being returned.
I haven't used unions or interfaces yet, but according to this the approach seems similar to what I proposed, i.e. the relevant parts of the schema are extracted to a separate file and included in the client. Another approach would be to include a full introspection file, or to fetch the introspection query at startup. But since the full schema is not needed for custom scalar support, I think for performance reasons it would be better to extract the relevant parts of the schema in a format that will be fast to look up at runtime.
There is no standard for this in the GraphQL spec AFAIK, and I would not push on it for now. If we want to keep the cache and all query/mutation results schema-compliant, there is no place in the Apollo core to do such a transformation.
I would suggest adding a utility function to graphql-anywhere which would enable people to transform parts of an object based on __typename (or whatever they want) on each node, using a transform function. Of course it would not mutate the initial object, but return a new copy.
And then show in an example how to use it to transform query/mutation results with reselect or another memoization library, so as not to kill performance.
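That suggestion might look something like this: a transform memoized by input reference, so re-renders with the same cached result do not redo the work (the helper is a sketch, not graphql-anywhere itself):

```javascript
// Hypothetical: memoize a result transform by reference so that repeated
// renders with the same (cached) result object skip the conversion.
function memoizeByReference(transformFn) {
  let lastInput;
  let lastOutput;
  return (input) => {
    if (input !== lastInput) {
      lastInput = input;
      lastOutput = transformFn(input); // returns a new copy, input untouched
    }
    return lastOutput;
  };
}

// Example transform: parse a date field without mutating the original.
const withDates = memoizeByReference((movie) => ({
  ...movie,
  releasedAt: new Date(movie.releasedAt),
}));
```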
Custom scalars are in the spec, and have been for some time; see for example here:
GraphQL provides a number of built-in scalars, but type systems can add additional scalars with semantic meaning. For example, a GraphQL system could define a scalar called Time...
Custom scalars have been supported on the server side for some time, even in Apollo. In the schema definition language you write them as scalar Date
. They are also supported by codegen tools like apollo-codegen (although they are all declared as any
right now).
Well, but that Date scalar serialisation is still just a server-side implementation detail, with no effect on the actual communication protocol between server and client. And there is no way to negotiate/communicate such a thing to the client side. Date
is just a custom name; it could be agreed that it will always be some ISO format, but there is no agreed way to communicate to the client which ISO format.
There is no standard directive, and if we put a custom directive with exact JS code for the transformation, it would not work in other languages. The same goes for any additional annotation that would just be explained in documentation.
So there is no spec for client-side custom scalar transformation. That means the correct discussion channel would probably be the GraphQL spec first, and then the Apollo realm.
Still, I think ~99% of user scenarios can be resolved by additional transformation and memoization outside of the client core and outside of the cache, until there is an agreed standard.
I think the keyword is "custom" here. If the exact implementation of the scalars were in the spec, such as the format for serialization etc., they wouldn't be "custom". So AFAICS the spec is complete as far as custom scalars go, and the fact that they are custom means that it is up to the application to decide the format for them. For example, one application may choose to use numbers to represent the custom scalar "Foo", while another application may choose strings to represent it. So I think this part is a contract between the application and its clients, and not related to the spec. As @clayne11 noted above, directives are not a good way to handle custom scalars, so I would say directives are not related to this discussion either.
What I think is needed is some utilities and APIs in, for example, apollo-client that application developers can use to make implementing custom scalars easier on the client side.
Here is a more elaborate example of how I think it could work.
Imagine we have this schema:
scalar Date
scalar Position
type Customer {
id: ID!
firstName: String!
lastName: String!
created: Date!
position: Position!
}
type Query {
customers: [Customer!]
}
schema {
query: Query
}
Then we have a link apollo-link-custom-scalar
to handle custom scalars. We bootstrap that link with:
1. An extract of the scalar fields in the schema (scalarSchemaExtract
).
2. Parse/serialize functions for each scalar type (scalarResolvers
). These functions may be shared with the server if it is JS; otherwise the server needs the same implementations in its language.
const scalarSchemaExtract = {
  Customer: {
    created: "Date",
    position: "Position"
  }
}
const scalarResolvers = {
  Date: {
    parseValue(value) {
      return new Date(value);
    },
    serialize(value) {
      return value.getTime();
    }
  },
  Position: {
    parseValue(value) {
      const parts = value.split(";");
      return { x: parts[0], y: parts[1] };
    },
    serialize(value) {
      return `${value.x};${value.y}`;
    }
  }
}
const customScalarLink = new CustomScalarLink(scalarSchemaExtract, scalarResolvers);
const link = ApolloLink.from([customScalarLink, new HttpLink()]);
const client = new ApolloClient({
link: link,
cache: new InMemoryCache()
});
So the idea is to provide the link with minimal information and then have it handle the scalars. When data arrives from the server, the link could check the __typename
field and look it up in scalarSchemaExtract
to see if it contains any scalars. Then lookup the function to parse the value in scalarResolvers
. When the client sends an operation to the server the reverse needs to somehow happen but I'm not sure there is a __typename
field to check in this case?
The Date
and Position
scalars are only examples, this API should be able to support any custom scalar. My app for example has a scalar that represents a filter described in a custom filtering language. I would like to store the parsed AST of the custom language rather than the unparsed string. This way I would not have to pay the cost of parsing the filter each time I get it from the store.
Yes @jonaskello is absolutely right. This has nothing to do with the graphql Spec. Just as you can declare custom query resolvers for the cache, Apollo should provide a way to declare custom transformations for certain scalar types, like dates which the majority of graphql APIs probably need. As the Spec states that this is not the responsibility of the graphql protocol, it clearly is the responsibility of the client. Ideally one could declare transformations for certain types in the Schema (if these are known to the Apollo client already), or alternatively at least for custom paths in a query.
The current situation is inadequate, especially because the component gets the responsibility of resolving custom types from the schema. This is obviously error-prone and can also be quite complex in deeply nested data structures (which are arguably one of GraphQL's strong suits), because it leads to complicated deeply nested prop transforms like this (still a relatively simple example from my current app):
graphql(MovieQuery, {
  props: ({ data: { movie }, ...props }) => ({
    ...props,
    movie: movie && {
      ...movie,
      showtimes: movie.showtimes && movie.showtimes.map(({ datetime, ...showtime }) => ({
        ...showtime,
        datetime: datetime && new Date(datetime)
      }))
    }
  })
})
To make it more complete, something like this.
I'm not sure what exactly gql
does, and whether the parsing would not be better done with already-prepared tools.
I'm also not sure what part of the schema we need to include, or whether we can identify input types without actually knowing the mutation/query mapping:
const scalarSchemaExtract = {
type: {
Customer: {
created: "Date",
position: "Position"
}
},
input: {
CustomerInput: {
created: "Date",
position: "Position"
}
}
}
Or
const scalarSchemaExtract = `
type Customer {
created: Date
position: Position
}
input CustomerInput {
created: Date
position: Position
}
type Query {
currentPosition: Position
}
type Mutation {
setCustomer(customer: CustomerInput!): Customer
}
schema {
query: Query
mutation: Mutation
}
`
I hadn't seen that there are no custom scalars yet. This is really a must-have!
Could this be done using a link that requests the type using __type for every type it receives and, if it has a date field, parses it? This is the best solution I can think of without making any changes to the libraries involved.
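For illustration, the __type introspection result for a single type already identifies which fields are custom scalars (the response shape below follows the standard introspection format; the network request itself is omitted):

```javascript
// Hypothetical: given a __type introspection result, list the fields
// whose type is a custom (non-built-in) scalar.
const BUILT_IN_SCALARS = new Set(['Int', 'Float', 'String', 'Boolean', 'ID']);

function customScalarFields(typeIntrospection) {
  return typeIntrospection.fields
    .filter(({ type }) => {
      // Unwrap NON_NULL / LIST wrappers to reach the named type.
      let t = type;
      while (t.ofType) t = t.ofType;
      return t.kind === 'SCALAR' && !BUILT_IN_SCALARS.has(t.name);
    })
    .map((field) => field.name);
}

// Shaped like a response to:
// { __type(name: "Schedule") { fields { name type { kind name ofType { kind name } } } } }
const scheduleType = {
  fields: [
    { name: 'id', type: { kind: 'NON_NULL', name: null, ofType: { kind: 'SCALAR', name: 'ID' } } },
    { name: 'date', type: { kind: 'SCALAR', name: 'Datetime' } },
    { name: 'title', type: { kind: 'SCALAR', name: 'String' } },
  ],
};
```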
This looks like a pretty serious issue. Does it mean we cannot use the scalar type functionality of GraphQL at all? Since we cannot use it on the frontend, we cannot use it on the backend in 99% of cases, since GraphQL is a view layer and what we have on the backend must be on the frontend as well.
any updates on this? Custom resolvers/transformers on the client-side is a must have.
Now, I tend to store some serialized 'objects' (not objects in the Java/C++ sense) as strings in my GraphQL database. All I need is to be able to 'parse' all strings: any string without tagged elements just returns the plain string, while tagged elements in the string are transformed (resolved).
What I'd like to do is: apply the parse
function to all Strings in a query result.
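That could be sketched as a recursive walk that runs parse on every string in the result (parse here is a toy stand-in for the tagged-element parser described above):

```javascript
// Hypothetical: walk a query result and run `parse` on every string.
// Plain strings come back unchanged; tagged strings are transformed.
function mapStrings(node, parse) {
  if (typeof node === 'string') return parse(node);
  if (Array.isArray(node)) return node.map((item) => mapStrings(item, parse));
  if (node !== null && typeof node === 'object') {
    return Object.fromEntries(
      Object.entries(node).map(([key, value]) => [key, mapStrings(value, parse)])
    );
  }
  return node; // numbers, booleans, null pass through
}

// Toy parser: strings tagged with "@date:" become Date objects.
const parseTagged = (s) => (s.startsWith('@date:') ? new Date(s.slice(6)) : s);
```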
Just finished throwing together a primitive deserializer using io-ts while referencing James's post:
import { HttpLink } from 'apollo-link-http';
import { ApolloLink, Operation, NextLink } from 'apollo-link';
import * as t from 'io-ts'
import { failure } from 'io-ts/lib/PathReporter'
// represents a Date from an ISO string
const Datetime = new t.Type<Date, string>(
'Datetime',
(m): m is Date => m instanceof Date,
(m, c) =>
t.string.validate(m, c).chain(s => {
const d = new Date(s)
return isNaN(d.getTime()) ? t.failure(s, c) : t.success(d)
}),
a => a.toISOString()
)
const Schedule = t.type({
id: t.string,
createdAt: Datetime,
updatedAt: Datetime,
date: Datetime,
title: t.string,
details: t.union([t.string, t.null]),
})
let operationDeserializers = {
GetSchedule: t.type({ schedule: Schedule })
}
const scalarLink = new ApolloLink((operation: Operation, forward: NextLink) =>
forward(operation).map(({ data, ...response }) =>
({
...response,
data: operationDeserializers[operation.operationName]
.decode(data)
.getOrElseL(errors => {
throw new Error(failure(errors).join('\n'))
})
})
)
)
const link = ApolloLink.from([
scalarLink,
new HttpLink({ uri: 'http://localhost:5000/graphql' })
]);
I'm not sure if it handles errors correctly, or if it's positioned to take full advantage of the caching layer, or if I'm misusing the observable api somehow... but it gets the properly typed props to the component, so it seems like a decent start.
Serialization seems like it will be more difficult. For vanilla js users also looking to roll their own, I recommend gcanti's similar project, tcomb.
Long tail thoughts: eventually, having an apollo-codegen [--passthrough-custom-scalars]
extension using io-ts-codegen could let us generate the runtime types, as well as a CustomScalarLink
that accepts a Record<string, RuntimeType>
and handles (de)serialization.
For now I think we manually have to serialize and deserialize our scalars. My proposal:
Make ApolloClient accept a hydrator option
hydrator: new ApolloClientHydrator({
scalars: { Date: DateScalar }
});
Whenever we receive data, based on the introspection of types, we can map-reduce it based on the scalars we've registered. Likewise, when we're performing mutations or passing arguments to a query, serialisation should be done.
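The scalar objects in that proposal would presumably pair a parse function with a serialize function, something like this (the DateScalar shape here is illustrative, mirroring GraphQLScalarType on the server):

```javascript
// Hypothetical shape for a scalar passed to the proposed hydrator:
// parseValue runs on data received, serialize on variables sent.
const DateScalar = {
  parseValue: (raw) => new Date(raw),
  serialize: (date) => date.toISOString(),
};

// Round-trip: what the hydrator would do on receive and on send.
const received = DateScalar.parseValue('2021-03-04T00:00:00.000Z');
const sent = DateScalar.serialize(received);
```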
I went ahead and wrote some graphql-to-io-ts templates for graphql-code-generator
. I haven't had time to put it through the wringer quite yet, and it currently depends on my fork pending a PR, but it seems like a promising approach.
If I understand ApolloLink
correctly, a more robust version of what I had above would be all that's needed for a HydrationLink
(probably a semantically better name than CustomScalarLink
, because deserialization could involve higher-level actions like sorting).
After speaking with @glasser, I noticed that we don't currently have great support for custom scalar types on Apollo Client. I understand that these are probably pretty difficult to support without having the schema on the client but we could make it possible to add some custom support for scalar serialization on the client.
Although custom scalar types are certainly important for lots of applications, considering some of the other features and fixes we have to build, this is probably not hugely important at the moment. (I could definitely be wrong on this.)