maoosi / prisma-appsync

⚡ Turns your ◭ Prisma Schema into a fully-featured GraphQL API, tailored for AWS AppSync.
https://prisma-appsync.vercel.app
BSD 2-Clause "Simplified" License
224 stars · 19 forks

Help: Large schema in AppSync is slow #175

Closed · WuDo closed this issue 3 months ago

WuDo commented 3 months ago

Prisma-AppSync generates a GraphQL schema of ~25,000 lines for us. AppSync now adds a ~600ms delay before it calls the Prisma Lambda function. When we reduce the schema (by removing some models), the delay is shorter.

Does anyone have any advice? A response time of around ~800ms per request is too long.

We have 41 models with relations, and we have already disabled unused queries and mutations, as well as all subscriptions.

Example timeline: [image attached]

maoosi commented 3 months ago

Hey @WuDo, 800ms is indeed quite long.

Here are a few recommendations to help you improve performance. You might already be doing some of this, so take it with a grain of salt:

  1. Trim your generated GraphQL schema: Reduce the size of your schema by using @gql directives to remove unnecessary parts. This can make a big difference. For example:

    /// @gql(mutations: null, subscriptions: null, queries: { list: null, count: null })

    More info here: https://prisma-appsync.vercel.app/features/gql-schema.html

  2. Reduce schema complexity: AWS AppSync struggles with very large schemas, and 41 models is a lot. Consider breaking your schema into smaller parts. Note that Prisma-AppSync isn’t designed for dealing with multiple schemas, so this might need extra custom work.

  3. Indexing and pagination: Make sure your database has indexes to speed up queries. Using pagination can also help, especially for list operations. Use the built-in skip and take parameters to paginate your queries. You can also change the default pagination in Prisma-AppSync like this:

    const prismaAppSync = new PrismaAppSync({
        defaultPagination: 20, // default is 50
    })

  4. Enable caching: If you can afford it, turn on caching in AppSync. This can make repeated queries faster. More info here: AWS AppSync Caching.

  5. Optimize query complexity: Make sure your queries are simple and only ask for the data you really need, especially when dealing with relational data.

  6. Adjust Lambda resolver memory and prevent cold starts: Increase the memory allocated to your Lambda resolvers to speed up execution. Also, prevent cold starts by using AWS Lambda Provisioned Concurrency to keep some instances warm.
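To make point 1 more concrete, here is a sketch of how the `@gql` directive could be applied per model in `schema.prisma`. The model and field names are hypothetical; the directive syntax is the one from the example above (see the linked docs for the full set of options):

```prisma
// Read-mostly model: drop mutations, subscriptions, and the count query.
/// @gql(mutations: null, subscriptions: null, queries: { count: null })
model Invoice {
  id     Int    @id @default(autoincrement())
  number String
}

// Model that is only ever fetched one-by-one: also drop the list query.
/// @gql(mutations: null, subscriptions: null, queries: { list: null, count: null })
model Setting {
  id    Int    @id @default(autoincrement())
  key   String @unique
  value String
}
```

Applying this systematically across 41 models can remove a large fraction of the generated input types, which is where most of the schema size usually comes from.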
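For point 6, here is a minimal sketch of what the memory and provisioned-concurrency settings could look like with AWS CDK v2. The construct names, memory size, and concurrency figure are illustrative assumptions, not recommendations — tune them against your own metrics:

```typescript
import { Stack, StackProps, Duration } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class ResolverStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // More memory also means proportionally more CPU for a Lambda,
    // which speeds up Prisma engine startup and query execution.
    const resolverFn = new lambda.Function(this, 'PrismaResolver', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist/resolver'), // hypothetical build output path
      memorySize: 1024, // up from the 128 MB default
      timeout: Duration.seconds(10),
    });

    // Keep a couple of instances warm to avoid cold starts; point the
    // AppSync data source at this alias rather than $LATEST.
    new lambda.Alias(this, 'LiveAlias', {
      aliasName: 'live',
      version: resolverFn.currentVersion,
      provisionedConcurrentExecutions: 2,
    });
  }
}
```

Note that provisioned concurrency is billed even when idle, so it only makes sense if cold starts are actually showing up in your timeline.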

Hopefully, this helps. I will also add a task to investigate performance for larger schema sizes. You’re the first to raise this issue, so it's not something we've looked into before. It might be worth checking if Prisma-AppSync is the cause and if we can make improvements on our end.

It would also help if you could share more details on specific queries and models.

WuDo commented 3 months ago

Thank you for your reply. We are indeed doing all of this already. By trial and error, we found that the problem really is in AppSync and the large schema. Even when the data source is a dummy Lambda, a larger schema means a longer delay before AppSync calls the Lambda.

Now I am trying to trim the schema as much as possible, but ...

In packages/generator/src/schema.ts, the code generating nested inputs contains ?.filter(field => !field?.isReadOnly). How do I define a field as readOnly? I haven't found it in this project's documentation or Prisma's, only for TS schema creation.

maoosi commented 3 months ago

In packages/generator/src/schema.ts, the code generating nested inputs contains ?.filter(field => !field?.isReadOnly). How do I define a field as readOnly? I haven't found it in this project's documentation or Prisma's, only for TS schema creation.

isReadOnly is not something you can manually define. It comes directly from the Prisma AST, returned from the Prisma engine, based on your Prisma Schema. Although I don't remember the exact rules, I believe that applying Prisma directives like @default(autoincrement()) or @updatedAt would automatically set a field as readOnly, as these fields are not expected to be manually set by the user.
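As a sketch of that behaviour (my reading of the comment above, worth verifying against your own generated schema), a model like the following would typically have some fields filtered out of the nested input types:

```prisma
model User {
  id    Int    @id @default(autoincrement())
  posts Post[]
}

model Post {
  // Per the comment above, fields with @default(autoincrement())
  // or @updatedAt may be treated as readOnly by the Prisma engine.
  id        Int      @id @default(autoincrement())
  updatedAt DateTime @updatedAt

  // Relation scalar fields (foreign keys like authorId) are marked
  // isReadOnly in the Prisma DMMF, since they are set through the
  // relation field rather than directly.
  author    User     @relation(fields: [authorId], references: [id])
  authorId  Int

  title     String
}
```

In other words, the flag is derived from the schema's structure rather than from any attribute you can set explicitly.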

WuDo commented 3 months ago

Thank you for the replies. Nothing left to solve here; it's our battle now.