hasura / graphql-engine

Blazing fast, instant realtime GraphQL APIs on your DB with fine-grained access control; also trigger webhooks on database events.
https://hasura.io
Apache License 2.0

Remote schema type merging #2494

Closed · michaelhayman closed this 4 years ago

michaelhayman commented 5 years ago

Hi!

I need to be able to return Hasura types from my remote schema, so that Apollo Client can automatically refresh its cache without having to make a second GraphQL call (one of the main advantages of Apollo Client!).

I'm building my remote schema with Apollo like this:

  // graphql-tools (v4-era API): makeExecutableSchema and mergeSchemas
  // are both exported from the graphql-tools package.
  const fs = require("fs")
  const path = require("path")
  const { makeExecutableSchema, mergeSchemas } = require("graphql-tools")

  // Schema for the types and resolvers defined locally in apollo-server
  // (localSchema and resolvers are defined elsewhere in the server).
  const localExecutableSchema = makeExecutableSchema({
    typeDefs: localSchema
  })

  // SDL introspected from Hasura (see the gq command below).
  const hasuraTypeDefs = fs.readFileSync(path.join(__dirname, "schema.graphql"), "utf8")

  const remoteExecutableSchema = makeExecutableSchema({
    typeDefs: hasuraTypeDefs
  })

  // Merge the local schema with the introspected Hasura schema.
  const newSchema = mergeSchemas({
    schemas: [
      localExecutableSchema,
      remoteExecutableSchema,
    ],
    resolvers: resolvers
  })

I downloaded the types from Hasura like this:

gq http://localhost:8080/v1/graphql -H 'X-Hasura-Admin-Secret: *********' --introspect > schema.graphql


It's definitely loading all the types into apollo-server 🎉, but then I get this error from Hasura. What's happening here? Is there some way to disable the aggregate fields other than editing the file? Or some way to make apollo-server/Hasura deal with these types?

graphql-engine_1  | {
  "timestamp": "2019-07-09T12:30:50.027+0000",
  "level": "warn",
  "type": "metadata",
  "detail": {
    "message": "Inconsistent Metadata!",
    "info": {
      "objects": [
        {
          "definition": {
            "definition": {
              "url": null,
              "headers": [],
              "url_from_env": "NODE_SCHEMA_URL",
              "forward_client_headers": true
            },
            "name": "node-server",
            "comment": null
          },
          "reason": "types: [
            profile_responses_aggregate_fields,
            slides_aggregate_fields,
            user_notification_preferences_aggregate_fields,
            users_public_aggregate_fields,
            ..., <and so on, all "aggregate" fields for every table>
          ] have mismatch with current graphql schema. HINT: Types must be same.",
          "type": "remote_schema"
        }
      ]
    }
  }
}

lexi-lambda commented 5 years ago

To clarify: are you running apollo-server behind Hasura, or are you running Hasura behind apollo-server? That is, is your client connecting to Hasura or to apollo-server?

Either way, set up the remote schema in Hasura or apollo-server, but not both. Doing the merging on both ends means you’re creating a schema cycle, and both servers will end up trying to re-merge the merged schema (which already contains their local schema) with their local schema. If either schema changes, the schema will become inconsistent with itself because it’s trying to merge the old schema with the new one.

michaelhayman commented 5 years ago

It's the former: the remote schema is behind Hasura.

How do I return a Hasura type to the client from a custom resolver, then? Any resolver I define in my remote schema has to return some kind of type, and without returning the exact type that Hasura would return, no caching happens.

E.g. a resolver for updating a user's account with special logic (verifying an SMS token, for example) that can't be handled purely by a Hasura update. This should return the user record as defined by the user type, but since that type is defined in Hasura, I can't (as far as I know). So the client has to manually refetch that data from Hasura to update its cache, partially defeating the purpose of using Apollo Client in the first place...

TL;DR: Currently I'm returning booleans from Apollo resolvers when I should be returning records, because I lack the Hasura-defined types in my remote schema.
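
For illustration, the shape I'm after looks something like this (a sketch; verifySmsToken and the users fields are hypothetical names here):

# Remote-schema SDL (hypothetical). The mutation should return the
# Hasura-generated `users` type so Apollo Client can update its cache,
# but `users` is defined in Hasura's schema, not in the remote schema.
type Mutation {
  verifySmsToken(userId: ID!, token: String!): users!
}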

michaelhayman commented 5 years ago

I had talked about this in the Hasura channel on Discord a few months ago, and there the team (or the honorary members) suggested I do this. It's just falling over on the aggregate piece; hopefully there's some way around that!

lexi-lambda commented 5 years ago

If you want to share types between your remote server and Hasura, you do, indeed, need to duplicate those types in your remote schema. I would just recommend doing it manually, and only for the types you need, to minimize type inconsistency errors. As long as the types in your remote schema are identical to the types generated by Hasura, it should work okay.

It’s a bit of boilerplate to maintain the duplicate types, but it’s not necessarily bad boilerplate, since if you only copy the types you need, then a schema inconsistency error is more informative: it probably means you have to update some logic in your remote schema to accommodate the change. If you just copy all the types over, you’ll likely get a lot of spurious inconsistency errors that your remote schema doesn’t care about, so the consistency checker won’t be as useful to you.
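
For example, a hand-maintained duplicate might look something like this (a sketch; the users type and its fields stand in for whatever your remote schema actually needs to return):

# Copied by hand into the remote schema, kept identical to the
# Hasura-generated type. Field names and types must match exactly.
scalar uuid  # Hasura's custom scalar, redeclared locally

type users {
  id: uuid!
  name: String!
}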

That said, it’s also possible that there’s something wrong with the way Hasura checks consistency of the *_aggregate_fields types, so maybe there’s a bug in there, too. Does the issue happen for any Postgres schema?

tirumaraiselvan commented 5 years ago

Yeah, it is very interesting that the error reports a mismatch only in the *_aggregate_fields types. Ideally, there shouldn't be any problems if the types are exactly the same.

ecthiender commented 5 years ago

@michaelhayman if you're using Hasura to merge your remote schema, I am not sure why you would write mergeSchemas code in apollo-server. Can you clarify/elaborate a bit more on your setup?

Also, pasting the relevant portions of your localSchema would help. It would be awesome if you could give us a Heroku app link with the minimal schema required (and the problematic remote schema added) to reproduce the issue. That would help us debug faster.

coco98 commented 5 years ago

@michaelhayman

I think this is something we should be able to solve with remote joins (that allow relationships with remote schemas):

  1. Have "independent" custom resolvers / types in your remote schema
  2. The return types (queries, mutations) can contain a field that references a Hasura type. Say user_id is a field in the verify_sms_token mutation; user_id is also a unique id for an entry in the user table in Postgres.
  3. You create a relationship from user_id to the user table in Hasura (in the remote schema configuration at Hasura)
  4. Frontend clients can traverse the user model (and anything else that the user is related to) in their GraphQL queries, even though the remote schema doesn't know how to fetch user data or whatever else that user is related to.

The current remote joins feature supports database-to-remote-schema relationships, so you can do this:

query {
  user_in_postgres {
    id
    name
    remote_schema {
      more_info
    }
  }
}

But once we also support remote-schema-to-Postgres relationships, you will be able to do this:

query {
  remote_schema {
    user_id
    more_info
    user_in_postgres {
      id
      name
    }
  }
}

Your custom resolvers and types can return completely independent types, but you can still get the benefit of the graph and the resolver logic in a different part of the stack via a relationship that is resolved by Hasura.

Do let me know if that makes sense and whether it would solve the problem.

michaelhayman commented 5 years ago

So from my remote schema I return an ID, and then I can use that to do a join in Hasura via remote schemas.

I think this would work perfectly, if it also works for mutations :)

michaelhayman commented 5 years ago

Just to follow up, will this support mutations? That's the use case: I update a record (or records) or perform some action via a resolver, and I want that resolver to return the updated record in exactly the same way as if I had queried it; that way the client can automatically update its cache :)

Currently I just return 'true', so Apollo Client has to do an additional query to update its cache.

Thanks!

coco98 commented 5 years ago

@michaelhayman Yep, that's exactly the idea!

The way it works is that your resolver basically returns a user_id, or product_id, or something like that, and Hasura will expose that as the full type with its relationships, etc. That way, the client can control which specific fields of the updated object(s) it needs in the mutation response for the client-side cache update.
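
Sketched as a client query, with hypothetical names (verify_sms_token returning a user_id that Hasura joins to the users table):

mutation {
  verify_sms_token(token: "123456") {
    user_id
    user {  # resolved by Hasura via a user_id -> users.id relationship
      id
      name
    }
  }
}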

shanecontinued commented 5 years ago

@coco98 when will remote joins be released? I saw the announcement several months ago.

0xGosu commented 5 years ago

+1 for this feature to be implemented soon. Also, can you please allow this: currently, nodes from different GraphQL servers cannot be used in the same query/mutation.

marionschleifer commented 4 years ago

This issue will be solved by remote joins, which will be released in the next few weeks. Closing this issue.

tejasmanohar commented 4 years ago

Is there an update on when remote joins will be released? I also want to be able to "return rich Hasura-generated types" from my custom mutation.

tirumaraiselvan commented 4 years ago

@tejasmanohar The new Actions feature allows you to create custom mutations and connect them with the rest of the graph. See https://hasura.io/docs/1.0/graphql/manual/actions/action-connect.html
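
For illustration, an action along these lines might be defined like this (a sketch with hypothetical names; the action returns an id that an action relationship then joins back to the users table):

# Hypothetical action definition in Hasura:
scalar uuid

type Mutation {
  verifySmsToken(token: String!): VerifyOutput
}

type VerifyOutput {
  user_id: uuid!
  # An action relationship from user_id to users.id would expose a
  # `user` field here, resolved by Hasura against Postgres.
}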

peitalin commented 4 years ago

> This issue will be solved by remote joins, which will be released in the next few weeks. Closing this issue.

@tirumaraiselvan @marionschleifer @lexi-lambda

I don't think this thread answers the original question. @michaelhayman asks why the GraphQL schema generated by:

gq http://localhost:8080/v1/graphql -H 'X-Hasura-Admin-Secret: *********' --introspect > schema.graphql

cannot be merged back into the Hasura GraphQL schema, and why it spits out a "Types must be same" error. GraphQL introspection should generate identical schemas, no? So remote schema stitching should work, and this is clearly a bug.

If this is not possible, teams have to maintain two sets of GraphQL types that are identical in all ways except for the __typename (Products vs. products). This also breaks Apollo caching and basically makes Hasura a non-option for teams thinking of migrating their existing GraphQL infrastructure over to Hasura.

Actions and remote joins do not really solve the issue for teams with semi-complex mutations that require libraries, etc., as that would require rewrites of upstream services.

tirumaraiselvan commented 4 years ago

Hi @peitalin

Certainly you can reuse the exact same types in your remote schema and Hasura will not complain. You do not need to define Products and products; just one of them will do.

The issue is how to automate this, and the answer is that it is not easy, because of the cyclical dependency. The remote schema supposedly adds new types and fields to Hasura, so Hasura has those types and fields too. But the remote schema itself uses the introspected Hasura schema, so it's not clear what will happen at runtime.

Now, in your particular example: you are copying the types semi-automatically by generating a static schema.graphql file (prior to adding any remote schemas) and then loading it into your remote schema. This should ideally work, but because the schema generated by Hasura is fairly complex, there could be issues in the compatibility checks. If this is the pattern you want to pursue, then please do create a new issue, since this one has a mix of discussions.

ZelimDamian commented 3 years ago

> Hi @peitalin
>
> Certainly you can reuse the exact same types in your remote schema and Hasura will not complain. You do not need to define Products and products; just one of them will do.

Hi @tirumaraiselvan, based on this reply, I would assume that acquiring Hasura's schema through introspection and using the types from there to expose some custom queries and mutations should work. But it doesn't for me, nor apparently for the OP, as we both get the same error message. And by the way, I'm getting the error not just for the aggregate types but also for "data" types like users/accounts/etc.

> The issue is how to automate this, and the answer is that it is not easy, because of the cyclical dependency. The remote schema supposedly adds new types and fields to Hasura, so Hasura has those types and fields too. But the remote schema itself uses the introspected Hasura schema, so it's not clear what will happen at runtime.

I'm also experiencing this cyclical dependency: I was able to merge the remote schema once (by removing all Hasura types from it and using primitive types instead) and then introspected Hasura again. This seems like it will become a problem once the "have mismatch with current graphql schema" problem is resolved.

> Now, in your particular example: you are copying the types semi-automatically by generating a static schema.graphql file (prior to adding any remote schemas) and then loading it into your remote schema. This should ideally work, but because the schema generated by Hasura is fairly complex, there could be issues in the compatibility checks. If this is the pattern you want to pursue, then please do create a new issue, since this one has a mix of discussions.

This applies to me, as I'm attempting to do exactly that: extracting the "data" types and ignoring things like queries and mutations. It doesn't work:

        "reason": "types: [ users_select_column, users_order_by, users, subscription_root, query_root, users_bool_exp ] have mismatch with current graphql schema. HINT: Types must be same.",
        "type": "remote_schema"
ZelimDamian commented 3 years ago

An update:

I was able to make it work as described in the previous comment, but only when I removed all "backwards" relationships from the tables.

Given two tables, User and Patient, where the Patient table has a foreign key user_id pointing to the corresponding User: Hasura allows creating two relationships based on the foreign key, Patient.User (the "forward" relationship, as it follows the direction of the foreign key) and User.Patient (the "backwards" relationship).

The "forward" relationship works as expected. The "backwards" relationship breaks remote schema merging with the error described above.
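
In SDL terms, a sketch of the two generated relationship fields (type and field names are illustrative):

# Sketch of the Hasura-generated relationship fields:
scalar uuid

type patients {
  id: uuid!
  user_id: uuid!
  user: users!            # "forward" relationship: merges fine
}

type users {
  id: uuid!
  patients: [patients!]!  # "backwards" (array) relationship: triggers the
                          # "Types must be same" mismatch when re-added
}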

JCMais commented 3 years ago

> @tejasmanohar The new Actions feature allows you to create custom mutations and connect them with the rest of the graph. See https://hasura.io/docs/1.0/graphql/manual/actions/action-connect.html

@tirumaraiselvan so this is not possible with remote schemas alone?

Are there plans to support remote joins for remote schemas, just like there are for actions?

Example: the mutation UpdateUser in the remote schema returns an author_id: ID field; we would then be able to create an updatedUser field in Hasura using a remote join from UpdateUser.author_id to the users.id column. A sketch of this is shown below.
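
Sketched as remote-schema SDL (names are from the example above; updatedUser is the hypothetical result of the remote join):

# Hypothetical remote-schema SDL:
type Mutation {
  UpdateUser(id: ID!, name: String): UpdateUserOutput!
}

type UpdateUserOutput {
  author_id: ID!
  # A remote join from author_id to users.id would let Hasura expose an
  # `updatedUser: users` field on this type.
}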

If there is an issue with this, please do let me know, as this was the only one I found. If you think it's better to create a separate issue, let me know.

I find it counter-intuitive having to add an Action for this, as I'm already using a remote schema.

Edit: Looks like there is already an issue for this: https://github.com/hasura/graphql-engine/issues/5801

jgoux commented 3 years ago

I was so happy that I could reuse Hasura's types to build custom mutations with Nexus.

Then I tried to add my Nexus API as a remote schema. 😭

[screenshot of the resulting error]

Simple example with the remote mutation: createTask(input: CreateTaskInput!): task!

My task type is strictly equivalent between Hasura and Nexus (as it comes from Hasura and is code-generated for Nexus).

task on Nexus: [screenshot]

task on Hasura: [screenshot]
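
To illustrate, a matching pair of definitions looks something like this (a sketch; the exact fields are hypothetical):

# The same `task` type, defined identically on both sides:
scalar uuid

type task {
  id: uuid!
  title: String!
  done: Boolean!
}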

I would expect Hasura to assert that the types are strictly identical and infer the relationship automatically. 👌