aws-amplify / amplify-category-api

The AWS Amplify CLI is a toolchain for simplifying serverless web and mobile development. This plugin provides functionality for the API category, allowing for the creation and management of GraphQL and REST based backends for your amplify project.
https://docs.amplify.aws/
Apache License 2.0

Append custom resolvers to auto generated DDB resolvers #129

Open cliren opened 2 years ago

cliren commented 2 years ago

Is this feature request related to a new or existing Amplify category?

function

Is this related to another service?

No

Describe the feature you'd like to request

GIVEN a Todo model, Amplify auto-generates a createTodo mutation attached to a DynamoDB resolver. Allow side effects on auto-generated mutations to perform custom operations. Example: send an email (via a Lambda function) after the createTodo mutation. Currently there is no option to perform side effects on auto-generated operations.

Describe the solution you'd like

Since all resolvers are pipeline resolvers, provide a way to append or prepend user-defined resolvers during schema design.

Auto-generated mutation:

type Mutation {
  createTodo(input: CreateTodoInput): Todo
}

Proposal

  1. Append to auto-generated resolver - executes after generated resolver
# @appendResolvers appends any user-defined @function resolvers onto the Amplify-generated resolvers.
# pipelineResolvers = [originalDDBCreateTodo, FunctionName-${env}]
# we need some kind of @ts-ignore to suppress the compile-time error on the not-yet-generated input type, i.e. CreateTodoInput
type Mutation {
  @appendResolvers
  createTodo(@ts-ignore input: CreateTodoInput): Todo  @function(name: "FunctionName-${env}")
}
  2. Prepend to auto-generated resolver - executes before generated resolver
# @prependResolvers prepends any user-defined @function resolvers onto the Amplify-generated resolvers.
# pipelineResolvers = [FunctionName-${env}, originalDDBCreateTodo]
# we need some kind of @ts-ignore to suppress the compile-time error on the not-yet-generated input type, i.e. CreateTodoInput
type Mutation {
  @prependResolvers
  createTodo(@ts-ignore input: CreateTodoInput): Todo  @function(name: "FunctionName-${env}")
}

Describe alternatives you've considered

  1. The described solution can be achieved manually via the AppSync console - but this doesn't work in a CI/CD environment, where manual changes are overwritten on deployments.
  2. Move DDB resolvers to Lambda - this is more manual work, slows down development, and defeats the purpose of Amplify.

Additional context

Side effects are very important to our requirements; generating them from the schema would simplify the developer workflow and make it more intuitive.

Is this something that you'd be interested in working on?

cjihrig commented 2 years ago

Please take a look at https://docs.amplify.aws/cli/graphql/custom-business-logic/. Amplify supports a few ways of extending, overriding, or otherwise customizing resolvers.

cliren commented 2 years ago

I reviewed the existing ways of extending resolvers; unfortunately, none of them address the problem I described. I will try to restate the problem in simple terms:

Requirement: Insert a record into the Todo DDB table and send an email after the insertion is complete. This should take advantage of Amplify's code generation, as below.

Manual Steps:

Modify the auto-generated createTodo DDB pipeline resolver in the AppSync console:

  1. Add a new function to pipeline resolver which invokes sendEmail lambda function.

Result: Creates Todo record and invokes sendEmail function.

cjihrig commented 2 years ago

It sounds like you want to extend an Amplify-generated resolver. That sounds like the use case covered here. Could you invoke your Lambda from a postDataLoad slot?

cliren commented 2 years ago

@cjihrig Thanks for your comment. postDataLoad is a VTL template; can it invoke a Lambda? Please let me know if you have an example.

cjihrig commented 2 years ago

AppSync supports Lambda resolvers, so it should work. Check out this tutorial.

cliren commented 2 years ago

@cjihrig I am aware of Lambda resolvers, but our requirement is to have a single generated mutation (createTodo) attached to a pipeline resolver (which executes the auto-generated DDB resolver, followed by invoking a custom Lambda). Achieving this declaratively is an important requirement for us to improve time to market.

cjihrig commented 2 years ago

@cliren you should be able to use a postDataLoad slot to invoke Lambda.
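
For anyone trying to follow this suggestion: in the Gen 1 GraphQL Transformer v2, a slot is extended by dropping a VTL file named after the slot into `amplify/backend/api/<api-name>/resolvers/`. A minimal sketch, assuming a Lambda data source (here called `SendEmailLambdaDataSource`, which you would have to register yourself, e.g. via custom resources) is attached to the pipeline function — the file name pattern is the slot convention, while the data-source and function names are illustrative:

```vtl
## File: Mutation.createTodo.postDataLoad.1.req.vtl (hypothetical slot file)
## Standard AppSync Lambda "Invoke" request mapping template: forwards the
## result of the preceding DynamoDB step to the Lambda function.
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "payload": {
    "typeName": "Mutation",
    "fieldName": "createTodo",
    "result": $util.toJson($ctx.prev.result)
  }
}
```

The matching response template (`Mutation.createTodo.postDataLoad.1.res.vtl`) can return `$util.toJson($ctx.prev.result)` so the mutation still returns the created Todo. The unresolved part, as discussed later in this thread and in #687, is wiring the Lambda data source to the generated pipeline function.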

cliren commented 2 years ago

@cjihrig That will be awesome, can you help me with an example?

cjihrig commented 2 years ago

Please check out the conversation in https://github.com/aws-amplify/amplify-cli/issues/9623. That should cover most of what you're trying to accomplish (adding a resolver slot and configuring the resource). If you still have implementation questions, I encourage you to check out the Amplify Discord server or Stack Overflow.

cliren commented 2 years ago

The example is not clear enough; I didn't find any references on how to invoke a Lambda from the postDataLoad slot. Reopening to clarify with an example, or, failing that, to explore a recommended solution option.

alexboulay commented 2 years ago

Hi @cliren! I am also trying to make this work! I created a separate issue in hopes of getting some traction; you might want to follow it. https://github.com/aws-amplify/amplify-category-api/issues/687

I really think this feature is important and of great value; it's unfortunate that there is no documentation supporting it.

sundersc commented 2 years ago

Reopening based on offline conversation with @cliren. The provided steps didn't solve the original request.

cliren commented 2 years ago

Any update on this?

endymion commented 1 year ago

Does anyone know if there is any workable way to do this? It seems like a common pattern and I'm surprised that this is so difficult with Amplify.

Two simple examples of use cases:

In both of those examples, we want to use Amplify's code generation, because that's part of why we're using Amplify. We want to extend the generated code, not replace it. If we chain @function directives together in the schema, we get a pipeline resolver that can call more than one function, but then we would have to reinvent the wheel and reimplement the CRUD operations ourselves as Lambda functions. We don't want to waste time on that, and we don't want to risk introducing a bug related to _version, _deleted, or other DataStore attributes.
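
The chained-@function approach mentioned above looks roughly like this in the schema (a sketch; the function names are placeholders, and each @function becomes one step of the pipeline, executed in order):

```graphql
type Mutation {
  # Each @function adds one Lambda step to the pipeline resolver.
  # The first step would have to reimplement the DynamoDB write that
  # Amplify normally generates - which is exactly the objection here.
  createTodo(input: CreateTodoInput!): Todo
    @function(name: "createTodoInDDB-${env}")
    @function(name: "sendEmail-${env}")
}
```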

If we instead extend the existing VTL templates that Amplify generated, then we can't call our Lambda functions, because nobody seems to know how to do that.

I tried copying generated VTL code from a @function directive into my own VTL templates, attempting to insert a "HELLO, WORLD!" Lambda function into the pipeline alongside Amplify's generated resolver. I failed.

Can anyone provide any working example?

endymion commented 1 year ago

There are some hints here: https://github.com/aws-amplify/amplify-category-api/issues/687

But that's still not a working example, since nobody seems to know how to get the function ID.

danrivett commented 3 weeks ago

We're also looking to do this, and our use case is very similar to the originally reported request, but possibly even simpler:

When creating a User model managed by Amplify and persisted in Dynamo DB, we want to add a precondition check to query DynamoDB by the email address passed into the createUser mutation to verify it isn't already in use.

We were planning to do that with an earlier PutItem using an attribute_not_exists condition on a separate UserEmail table, where the primary key of that table is the email. We want to do this because we don't want the primary key of the User itself to be the email, but rather a UUID, so it's immutable (we allow the user to change their email address).

This would allow us to prevent creating users with the same email address.

So basically we want to add an additional DDB resolver into the createUser pipeline with a separate DDB table datasource.
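
The precondition step described above maps to a standard AppSync DynamoDB `PutItem` request mapping template with a condition expression (a sketch; the `UserEmail` table would need to be wired up as its own data source for this pipeline function, and the attribute names are assumptions):

```vtl
## Hypothetical pipeline step against a separate UserEmail table.
## The condition fails the whole mutation if the email is already claimed.
{
  "version": "2018-05-29",
  "operation": "PutItem",
  "key": {
    "email": $util.dynamodb.toDynamoDBJson($ctx.args.input.email)
  },
  "attributeValues": {
    "claimedAt": $util.dynamodb.toDynamoDBJson($util.time.nowISO8601())
  },
  "condition": {
    "expression": "attribute_not_exists(#email)",
    "expressionNames": { "#email": "email" }
  }
}
```

The template itself is standard AppSync; the open question in this thread remains how to insert such a step into the Amplify-generated createUser pipeline.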

I'm going to try the suggestions in #687 but I'm not convinced it will work, and it would be great for Amplify to support my use case natively without a lot of complicated custom coding as preventing duplicate emails or other business identifiers seems like a fairly common requirement.

I'm happy to create a separate ticket (as related but different to this) but based on this ticket and #687 I'm not sure it's worthwhile?

alexboulay commented 3 weeks ago

> We're also looking to do this, and our use case is very similar to originally reported request, but possibly even simpler:
>
> When creating a User model managed by Amplify and persisted in Dynamo DB, we want to add a precondition check to query DynamoDB by the email address passed into the createUser mutation to verify it isn't already in use.
>
> We were planning on doing that by doing an earlier PutItem with an attribute_not_exists condition on a separate UserEmail table where the primary key of that table is the email. We want to do this as we don't want to have the primary key of the User itself being the email but a UUID so it's immutable (as we allow the user to change their email address).
>
> This would allow us to prevent creating users with the same email address.
>
> So basically we want to add an additional DDB resolver into the createUser pipeline with a separate DDB table datasource.
>
> I'm going to try the suggestions in #687 but I'm not convinced it will work, and it would be great for Amplify to support my use case natively without a lot of complicated custom coding as preventing duplicate emails or other business identifiers seems like a fairly common requirement.
>
> I'm happy to create a separate ticket (as related but different to this) but based on this ticket and #687 I'm not sure it's worthwhile?

Since you are targeting a DDB data source, it will work, at least in v1. I think v2 has zero support for this unfortunately. If you are using v1 and need help adding that step in the generated pipeline, I can provide an example.

danrivett commented 3 weeks ago

> Since you are targeting a DDB data source, it will work, at least in v1. I think v2 has zero support for this unfortunately. If you are using v1 and need help adding that step in the generated pipeline, I can provide an example.

That's very encouraging, thanks Alex. When you say v1 and v2, I assume you're referring to "Gen 1" vs "Gen 2". If so, we're still using Gen 1, so that should work.

If you're able to provide a basic example, that would be amazing, and I'd definitely appreciate it. But if that's too time-consuming for you currently, I'll give it a go based on the comments here and in #687 and see how I get on.