aws-amplify / amplify-category-api

The AWS Amplify CLI is a toolchain for simplifying serverless web and mobile development. This plugin provides functionality for the API category, allowing for the creation and management of GraphQL and REST based backends for your amplify project.

RFC - Pipeline Resolver Support #430

Closed mikeparisstuff closed 1 year ago

mikeparisstuff commented 5 years ago

Pipeline Resolvers Support

This RFC will document a process to transition the Amplify CLI to use AppSync pipeline resolvers. The driving use case for this feature is to allow users to compose their own logic with the logic that is generated by the GraphQL Transform. For example, a user might want to authorize a mutation that creates a message by first verifying that the user is enrolled in the message's chat room. Other examples include adding custom input validation or audit logging to @model mutations. This document is not necessarily final so please leave your comments so we can address any concerns.


Proposal 1: Use pipelines everywhere

Back in 2018, AppSync released a feature called pipeline resolvers. Pipeline resolvers allow you to serially execute multiple AppSync functions within the resolver for a single field (not to be confused with AWS Lambda functions). AppSync functions behave similarly to old style AppSync resolvers and contain a request mapping template, a response mapping template, and a data source. A function may be referenced by multiple AppSync resolvers allowing you to reuse the same function for multiple resolvers. The AppSync resolver context ($ctx in resolver templates) has also received a new stash map that lives throughout the execution of a pipeline resolver. You may use the $ctx.stash to store intermediate results and pass information between functions.

The first step towards supporting pipeline resolvers is to switch all existing generated resolvers to use pipeline resolvers. To help make the generated functions more reusable, each function defines a set of arguments that it expects to find in the stash. The arguments for a function are passed by setting a value in $ctx.stash.args under a key that matches the name of the function. Below is the full list of functions that will be generated by the different directives.
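
Concretely, the field's resolver request template would seed the stash, and each function would read its own arguments back out. A minimal VTL sketch of the convention (using the CreateX key from the list below):

## In the pipeline resolver's request mapping template: seed the function's arguments
#set($ctx.stash.args = {})
$util.qr($ctx.stash.args.put("CreateX", { "input": $ctx.args.input }))

## Later, inside the CreateX function's request mapping template: read them back
#set($args = $ctx.stash.args.CreateX)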

Generated Functions

Function: CreateX

Generated by @model and issues a DynamoDB PutItem operation with a condition expression to create records if they do not already exist.

Arguments

The CreateX function expects

{
    "stash": {
        "args": {
            "CreateX": {
                "input": {
                    "title": "some title",
                },
                "condition": {
                    "expression": "attribute_not_exists(#id)",
                    "expressionNames": {
                        "#id": "id"
                    },
                    "expressionValues": {}
                }
            }
        }
    }
}
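
A CreateX request mapping template that consumes these arguments might look like the following. This is a sketch built on the standard AppSync DynamoDB PutItem request shape, not the final generated template:

## CreateX.req.vtl (sketch)
#set($args = $ctx.stash.args.CreateX)
{
    "version": "2017-02-28",
    "operation": "PutItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($util.defaultIfNullOrBlank($args.input.id, $util.autoId()))
    },
    "attributeValues": $util.dynamodb.toMapValuesJson($args.input),
    "condition": $util.toJson($args.condition)
}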

Function: UpdateX

Generated by @model and issues a DynamoDB UpdateItem operation with a condition expression to update if the item exists.

Arguments

The UpdateX function expects

{
    "stash": {
        "args": {
            "UpdateX": {
                "input": {
                    "title": "some other title",
                },
                "condition": {
                    "expression": "attribute_exists(#id)",
                    "expressionNames": {
                        "#id": "id"
                    },
                    "expressionValues": {}
                }
            }
        }
    }
}

Function: DeleteX

Generated by @model and issues a DynamoDB DeleteItem operation with a condition expression to delete if the item exists.

Arguments

The DeleteX function expects

{
    "stash": {
        "args": {
            "DeleteX": {
                "input": {
                    "id": "123",
                },
                "condition": {
                    "expression": "attribute_exists(#id)",
                    "expressionNames": {
                        "#id": "id"
                    },
                    "expressionValues": {}
                }
            }
        }
    }
}

Function: GetX

Generated by @model and issues a DynamoDB GetItem operation.

Arguments

The GetX function expects

{
    "stash": {
        "args": {
            "GetX": {
                "id": "123"
            }
        }
    }
}
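
A sketch of the corresponding request mapping template, using the standard AppSync DynamoDB GetItem shape:

## GetX.req.vtl (sketch)
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.stash.args.GetX.id)
    }
}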

Function: ListX

Generated by @model and issues a DynamoDB Scan operation.

Arguments

The ListX function expects

{
    "stash": {
        "args": {
            "ListX": {
                "filter": {
                    "expression": "",
                    "expressionNames": {},
                    "expressionValues": {}
                },
                "limit": 20,
                "nextToken": "some-next-token"
            }
        }
    }
}

Function: QueryX

Generated by @model and issues a DynamoDB Query operation.

Arguments

The QueryX function expects

{
    "stash": {
        "args": {
            "QueryX": {
                "query": {
                    "expression": "#hashKey = :hashKey",
                    "expressionNames": {
                        "#hashKey": "hashKeyAttribute",
                        "expressionValues": {
                            ":hashKey": {
                                "S": "some-hash-key-value"
                            }
                        }
                    }
                },
                "scanIndexForward": true,
                "filter": {
                    "expression": "",
                    "expressionNames": {},
                    "expressionValues": {}
                },
                "limit": 20,
                "nextToken": "some-next-token",
                "index": "some-index-name"
            }
        }
    }
}

Function: AuthorizeCreateX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeCreateX function expects no additional arguments. The AuthorizeCreateX function will look at $ctx.stash.args.CreateX.input and validate it against $ctx.identity. The function will manipulate $ctx.stash.args.CreateX.condition such that the correct authorization conditions are added.


Function: AuthorizeUpdateX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeUpdateX function expects no additional arguments. The AuthorizeUpdateX function will look at $ctx.stash.args.UpdateX.input and validate it against $ctx.identity. The function will manipulate $ctx.stash.args.UpdateX.condition such that the correct authorization conditions are added.
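
For example, an owner-based rule could append an ownership check to the stashed condition. This is a sketch only, assuming an "owner" field, a single owner rule, and a NONE data source; the generated logic would need to cover every configured rule:

## AuthorizeUpdateX.req.vtl (sketch)
#set($cond = $ctx.stash.args.UpdateX.condition)
#set($cond.expression = "($cond.expression) AND #owner = :identity")
$util.qr($cond.expressionNames.put("#owner", "owner"))
$util.qr($cond.expressionValues.put(":identity", $util.dynamodb.toDynamoDB($ctx.identity.username)))
## Runs on a NONE data source, so the function performs no data operation of its own
{
    "version": "2017-02-28",
    "payload": {}
}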


Function: AuthorizeDeleteX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeDeleteX function expects no additional arguments. The AuthorizeDeleteX function will look at $ctx.stash.args.DeleteX.input and validate it against $ctx.identity. The function will manipulate $ctx.stash.args.DeleteX.condition such that the correct authorization conditions are added.


Function: AuthorizeGetX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeGetX function expects no additional arguments. The AuthorizeGetX function will look at the result of the GetX function ($ctx.prev.result) and validate it against $ctx.identity. The function will return null and append an error if the user is unauthorized.


Function: AuthorizeXItems

Filters a list of items based on @auth rules placed on the OBJECT. This function can be used by top level queries that return multiple values (list, query) as well as by @connection fields.

Arguments

The AuthorizeXItems function expects $ctx.prev.result to contain a list of "items" that should be filtered. This function returns the filtered results.
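
A sketch of such a filtering function's response mapping template, again assuming a single owner-based rule on an "owner" field:

## AuthorizeXItems.res.vtl (sketch)
#set($filtered = [])
#foreach($item in $ctx.prev.result.items)
    #if($item.owner == $ctx.identity.username)
        $util.qr($filtered.add($item))
    #end
#end
$util.qr($ctx.prev.result.put("items", $filtered))
$util.toJson($ctx.prev.result)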


Function: HandleVersionedCreate

Created by the @versioned directive and sets the initial value of an object's version to 1.

Arguments

The HandleVersionedCreate function augments $ctx.stash.args.CreateX.input so that it is guaranteed to contain an initial version.
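
A sketch, assuming the version lives in a "version" field and the function runs on a NONE data source:

## HandleVersionedCreate.req.vtl (sketch)
#if($util.isNull($ctx.stash.args.CreateX.input.version))
    $util.qr($ctx.stash.args.CreateX.input.put("version", 1))
#end
{
    "version": "2017-02-28",
    "payload": {}
}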


Function: HandleVersionedUpdate

Created by the @versioned directive and updates the condition expression with version information.

Arguments

The HandleVersionedUpdate function uses $ctx.stash.args.UpdateX.input to append a condition to $ctx.stash.args.UpdateX.condition such that the object is only updated if the versions match.
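
A sketch, assuming the client supplies an expectedVersion in the update input (as the current @versioned directive does); HandleVersionedDelete would be analogous:

## HandleVersionedUpdate.req.vtl (sketch)
#set($cond = $ctx.stash.args.UpdateX.condition)
#set($cond.expression = "($cond.expression) AND #version = :expectedVersion")
$util.qr($cond.expressionNames.put("#version", "version"))
$util.qr($cond.expressionValues.put(":expectedVersion", $util.dynamodb.toDynamoDB($ctx.stash.args.UpdateX.input.expectedVersion)))
## The real template would also strip expectedVersion from the input and increment the stored version
{
    "version": "2017-02-28",
    "payload": {}
}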


Function: HandleVersionedDelete

Created by the @versioned directive and updates the condition expression with version information.

Arguments

The HandleVersionedDelete function uses $ctx.stash.args.DeleteX.input to append a condition to $ctx.stash.args.DeleteX.condition such that the object is only deleted if the versions match.


Function: SearchX

Created by the @searchable directive and issues an Elasticsearch query against your Elasticsearch domain.

Arguments

The SearchX function expects a single argument "params".

{
    "stash": {
        "args": {
            "SearchX": {
                "params": {
                    "body": {
                        "from": "",
                        "size": 10,
                        "sort": ["_doc"],
                        "query": {
                            "match_all": {}
                        }
                    }
                }
            }
        }
    }
}
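
The function body would forward those params to the Elasticsearch data source. A sketch of the request mapping template, with a hypothetical index path:

## SearchX.req.vtl (sketch)
{
    "version": "2017-02-28",
    "operation": "GET",
    "path": "/x/doc/_search",
    "params": $util.toJson($ctx.stash.args.SearchX.params)
}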

Generated Resolvers

The @model, @connection, and @searchable directives all add resolvers to fields within your schema. The @versioned and @auth directives will only add functions to existing resolvers created by the other directives. This section will look at the resolvers generated by the @model, @connection, and @searchable directives.

@model resolvers

type Post @model {
    id: ID!
    title: String
}

This schema will create the following resolvers:


Mutation.createPost

The Mutation.createPost resolver uses its own RequestMappingTemplate to set up the $ctx.stash so that its pipeline is parameterized to return the correct results.

Mutation.createPost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.CreatePost = {
    "input": $ctx.args.input
})

Function 1: CreatePost

The function will insert the value provided via $ctx.stash.args.CreatePost.input and return the results.

Mutation.createPost.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

Mutation.updatePost

The Mutation.updatePost resolver uses its own RequestMappingTemplate to set up the $ctx.stash so that its pipeline is parameterized to return the correct results.

Mutation.updatePost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.UpdatePost = {
    "input": $ctx.args.input
})

Function 1: UpdatePost

The function will update the value provided via $ctx.stash.args.UpdatePost.input and return the results.

Mutation.updatePost.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

Mutation.deletePost

The Mutation.deletePost resolver uses its own RequestMappingTemplate to set up the $ctx.stash so that its pipeline is parameterized to return the correct results.

Mutation.deletePost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.DeletePost = {
    "input": $ctx.args.input
})

Function 1: DeletePost

The function will delete the value designated via $ctx.stash.args.DeletePost.input.id and return the results.

Mutation.deletePost.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

Query.getPost

The Query.getPost resolver uses its own RequestMappingTemplate to set up the $ctx.stash so that its pipeline is parameterized to return the correct results.

Query.getPost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.GetPost = {
    "id": $ctx.args.id
})

Function 1: GetPost

The function will get the value designated via $ctx.stash.args.GetPost.id and return the results.

Query.getPost.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

Query.listPosts

The Query.listPosts resolver uses its own RequestMappingTemplate to set up the $ctx.stash so that its pipeline is parameterized to return the correct results.

Query.listPosts.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.ListPosts = {
    "filter": $util.transform.toDynamoDBFilterExpression($ctx.args.filter),
    "limit": $ctx.args.limit,
    "nextToken": $ctx.args.nextToken
})

Function 1: ListPosts

The function will list values using the arguments provided via $ctx.stash.args.ListPosts and return the results.

Query.listPosts.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

@connection resolvers

type Post @model {
    id: ID!
    title: String
    comments: [Comment] @connection(name: "PostComments")
}
type Comment @model {
    id: ID!
    content: String
    post: Post @connection(name: "PostComments")
}

The example above would create the following resolvers


Post.comments

The Post.comments resolver uses its own RequestMappingTemplate to set up the $ctx.stash so that its pipeline is parameterized to return the correct results.

Post.comments.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.QueryComments = {
    "query": {
        "expression": "#connectionAttribute = :connectionAttribute",
        "expressionNames": {
            "#connectionAttribute": "commentPostId"
        },
        "expressionValues": {
            ":connectionAttribute": {
                "S": "$ctx.source.id"
            }
        }
    },
    "scanIndexForward": true,
    "filter": $util.transform.toDynamoDBFilterExpression($ctx.args.filter),
    "limit": $ctx.args.limit,
    "nextToken": $ctx.args.nextToken,
    "index": "gsi-PostComments"
})

Function 1: QueryComments

The function will query the values designated via $ctx.stash.args.QueryComments and return the results.

Post.comments.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

Comment.post

The Comment.post resolver uses its own RequestMappingTemplate to set up the $ctx.stash so that its pipeline is parameterized to return the correct results.

Comment.post.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.GetPost = {
    "id": "$ctx.source.commentPostId"
})

Function 1: GetPost

The function will get the values designated via $ctx.stash.args.GetPost and return the results.

Comment.post.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

@searchable resolvers

type Post @model @searchable {
    id: ID!
    title: String
}

Query.searchPosts

The Query.searchPosts resolver uses its own RequestMappingTemplate to set up the $ctx.stash so that its pipeline is parameterized to return the correct results.

Query.searchPosts.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.SearchPosts = {
    "query": $util.transform.toElasticsearchQueryDSL($ctx.args.filter),
    "sort": [],
    "size": $context.args.limit,
    "from": "$context.args.nextToken"
})

Function 1: SearchPosts

The function will search using the values designated via $ctx.stash.args.SearchPosts and return the results.

Query.searchPosts.res.vtl

Return the result of the last function in the pipeline.

$ctx.prev.result

@auth resolvers

The @auth directive does not add its own resolvers but will augment the behavior of existing resolvers by manipulating values in the $ctx.stash.

@versioned resolvers

The @versioned directive does not add its own resolver but will augment the behavior of existing resolvers by manipulating values in the $ctx.stash.

Proposal 2: The @before and @after directives

There are many possibilities for how to expose pipeline functions via the transform. Defining a function of your own requires a request mapping template, a response mapping template, and a data source. Using a function requires that you place that function, in order, within a pipeline resolver. Any directive(s) introduced would need to accommodate both of these requirements. Here are a few options for discussion.

Before & After directives for adding logic to auto-generated model mutations

The main use case for this approach is to add custom authorization/audit/etc. logic to mutations that are generated by the Amplify CLI. For example, you might want to verify that a user is a member of a chat room before they can create a message. Currently this design only supports mutations, but if you have suggestions for how to generalize this for read operations, comment below.

directive @before(mutation: ModelMutation!, function: String!, datasource: String!) on OBJECT
directive @after(mutation: ModelMutation!, function: String!, datasource: String!) on OBJECT
enum ModelMutation {
    create
    update
    delete
}

Which would be used like so:

# Messages are only readable via @connection fields.
# Message mutations are pre-checked by a custom function.
type Message 
  @model(queries: null)
  @before(mutation: create, function: "AuthorizeUserIsChatMember", datasource: "ChatRoomTable")
{
    id: ID!
    content: String
    room: ChatRoom @connection(name: "ChatMessages")
}
type ChatRoom @model @auth(rules: [{ allow: owner, ownerField: "members" }]) {
    id: ID!
    messages: [Message] @connection(name: "ChatMessages")
    members: [String]
}

To implement your function logic, you would drop two files in resolvers/ called AuthorizeUserIsChatMember.req.vtl & AuthorizeUserIsChatMember.res.vtl:

## AuthorizeUserIsChatMember.req.vtl
{
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.input.messageRoomId)
    }
}

## AuthorizeUserIsChatMember.res.vtl
#if( ! $ctx.result.members.contains($ctx.identity.username) )
  ## If the user is not a member, do not allow the CreateMessage function to be called next.
  $util.unauthorized()
#else
  ## Do nothing and allow the CreateMessage function to be called next.
  $util.toJson($ctx.result)
#end

The @before directive specifies which data source should be called, and the order of the functions would be determined by the order of the @before directives on the model. The @after directive would work similarly, except the function would run after the generated mutation logic.
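
For example, stacking directives would let you run several functions before the generated mutation logic. A hypothetical sketch (ValidateMessageContent and its data source are made up for illustration):

type Message
  @model(queries: null)
  @before(mutation: create, function: "AuthorizeUserIsChatMember", datasource: "ChatRoomTable")
  @before(mutation: create, function: "ValidateMessageContent", datasource: "NONE")
{
    id: ID!
    content: String
}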

Audit mutations with a single AppSync function

type Message
  @model(queries: null)
  @after(mutation: create, function: "AuditMutation", datasource: "AuditTable")
{
    id: ID!
    content: String
}
# The Audit model is not exposed via the API but will create a table 
# that can be used by your functions.
type Audit @model(queries: null, mutations: null, subscriptions: null) {
    id: ID!
    ctx: AWSJSON
}

You could then use function templates like this:

## AuditMutation.req.vtl
## Log the entire resolver ctx to a DynamoDB table
#set($auditRecord = {
    "ctx": $ctx,
    "timestamp": $util.time.nowISO8601()
})
{
    "version": "2017-02-28",
    "operation": "PutItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($util.autoId())
    },
    "attributeValues": $util.dynamodb.toMapValuesJson($auditRecord)
}

## AuditMutation.res.vtl
## Return the same value as the previous function
$util.toJson($ctx.prev.result)

Request for comments

The goal is to provide simple-to-use and effective abstractions. Please leave your comments with questions, concerns, and use cases that you would like to see covered.

DanielCender commented 5 years ago

Hi @mikeparisstuff, I am presuming that these proposed features will allow more granular control of overwriting auto-generated AppSync resolvers, as currently outlined in the docs here. Curious if any of this is currently in the works.

My team is going to be implementing pipeline resolvers heavily on a current project, and I am looking forward to updates like these to allow us to control the whole pipeline from within our codebase.

I realize that there is support for constructing custom resolvers locally, per this section of the Amplify docs. Since we have been implementing pipelines to access Delta tables for use with DeltaSync, it would be incredible if we could not just create our custom resolvers locally, but also easily specify those pipelines, their contents, function ordering, etc.

I might be overlooking a way to accomplish this that combines multiple techniques from the docs/issues, but if not, just wanted to put this out there, given the nature of this RFC.

Thanks!

laura-sainz-mojix-com commented 5 years ago

+1!

YikSanChan commented 5 years ago

According to the Generated Functions section of Proposal 1, does this mean amplify-cli will auto-generate queries that look like:

// queries.js
const getPost = `query GetPost($id: ID!) {
    "stash": {
        "args": {
            "GetPost": {
                "id": $id
            }
        }
    }
}
`

and we can execute the graphql operation by

API.graphql(graphqlOperation(queries.getPost, { id: postId }))
  .then(response => response.data.xxx)

How will the response differ from the current version? How can we access stashed data from a previous graphql operation?

YikSanChan commented 5 years ago
  1. An easy-to-debug pipeline resolver is also very important. A very good first step would be to allow printing in VTL so that developers can tell what happens inside pipeline resolvers by looking at the CloudWatch logs. See https://github.com/aws-amplify/amplify-cli/issues/652.

  2. Can we add custom pipeline resolvers by adding files under the resolvers/ folder? See https://github.com/aws-amplify/amplify-cli/issues/1271. If not, this is a good feature to have.

ambientlight commented 5 years ago

@mikeparisstuff:

Looking at proposal 2, @before and @after don't allow chaining an arbitrary number of pipeline resolver functions.

What do you think about making @before and @after a generic pipeline resolver @function directive that takes an extra argument (mutation) and derives its ordering from its position relative to @model?

Then

type Message
  @before(mutation: create, function: "Authorize", datasource: "AuditTable")
  @model(queries: null)
  @after(mutation: create, function: "AuditMutation", datasource: "AuditTable")
{
    id: ID!
    content: String
}

becomes:

type Message
  @function(mutation: create, name: "Authorize", datasource: "AuditTable")
  @model(queries: null)
  @function(mutation: create, name: "AuditMutation", datasource: "AuditTable")
{
    id: ID!
    content: String
}

Then multiple pipeline resolver functions can be chained before and after @model's mutation resolvers, giving us greater flexibility.

We can treat @function as a generic pipeline resolver building block that allows us to compose any arbitrary pipeline resolver hierarchy. In this context we can express https://github.com/aws-amplify/amplify-cli/issues/83 as @functions with a Lambda data source:

@function(name: "ComplexCompose", datasource: "complex_compose_some_magic_api_request")
@http(url: "https://somemagicapi/:dataType/:postId/:secondType")
@function(name: "ComplexResponseTransform", datasource: "complex_response_transform")
@function(name: "AddSomeSeasoning", datasource: "add_some_seasoning")

One example of a use case where we might need multiple @functions after the 'primary' resolver: say we need some custom business logic to run inside a VPC, and those lambdas have pretty long cold starts. We could factor the logic from multiple @functions into this single one, attach a keep-alive CloudWatch event that periodically calls the function to avoid cold starts, and probably add another @function after it to transform the result into the desired form.

hisham commented 5 years ago

I like the Audit use case. How would you search through that audit log model you made? Hook it up to Elasticsearch? How would you protect this search through @auth rules? Would the graphql transformer support this use case fully?

tafelito commented 5 years ago

If I have to query some data before doing a mutation in the same operation, is the pipeline the solution for this too? Any plans on implementing transactional pipelines?

hisham commented 5 years ago

Also for the audit use case, from a performance perspective, it seems like recording to the audit table should happen in parallel with the primary mutation rather than before or after. It might be better to implement it as a Lambda stream from a DynamoDB table, for example. I assume pipeline resolvers currently don't allow for async or parallel operations.

timrchavez commented 5 years ago

@mikeparisstuff

Currently this design only supports mutations but if you have suggestions for how to generalize this for read operations

I don't know enough about the implementation, but what are the challenges with making proposal 2 work for query/read operations?

Ideally we'd want to support the isFriend scenario outlined here https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-pipeline-resolvers.html

davekiss commented 5 years ago

Just tried to create and run a pipeline function that depends on a result of an autogenerated resolver and realized the @after is what I'm looking for. I think adding as many functions in the SDL as you might need to complete the pipeline would be ideal, as @ambientlight suggests.

ajhool commented 5 years ago

Both proposals look good.

Currently this design only supports mutations but if you have suggestions for how to generalize this for read operations, comment below.

I'm not sure why the @before and @after approach wouldn't also be useful for get operations?


In Proposal 1, it appears that authorization is only provided for DynamoDB resolvers (i.e. AuthorizeCreateX, AuthorizeUpdateX). However, now that the @function directive has been added to the API ( aws-amplify/amplify-cli#83 ), there should also be an AuthorizeInvokeX (or AuthorizeFunctionX). Custom resolver lambda functions can add security in code, but preventing invocation would provide an additional layer of security that conforms to the auth groups defined throughout the schema. It would also be easier to add group logic into AuthorizeInvokeX than in lambda code.

hisham commented 5 years ago

Any estimate for when any of this RFC will start getting implemented, and its priority relative to other RFCs? It's seriously needed for any multi-tenancy app.

artista7 commented 5 years ago

@mikeparisstuff any updates?

artista7 commented 5 years ago

@mikeparisstuff, can we have a tentative deadline so we can decide whether to wait for this feature or go with an alternate approach? :)

kaustavghosh06 commented 5 years ago

@artista7 Could you take a look at the "Chaining functions" section here (https://aws-amplify.github.io/docs/cli/graphql#function) and see if it solves your use case?
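
For context, that section chains multiple @function directives on a single field, which compiles to a pipeline of Lambda invocations. A sketch with hypothetical function and type names (note this adds a new field rather than modifying the generated createUser resolver):

type Mutation {
    createUserChecked(input: CreateUserInput!): User
        @function(name: "checkIsMonday-${env}")
        @function(name: "createUser-${env}")
}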

artista7 commented 5 years ago

@kaustavghosh06, the point is that I want to put a filter on some mutations, e.g. createUser (but only on Mondays), so I wanted to convert the createUser resolver into a pipeline resolver instead of creating new mutation fields with different names.

nino-moreton commented 5 years ago

Creating a logger is a PITA right now until we have some form of this RFC. The current workaround we're thinking of is to create a custom mutation for each CRUD action with the help of the @function directive. So basically overriding all of our models' gql mutations.

andrewbtp commented 5 years ago

I'm a big fan of prop 1. Prop 2 adds a lot to the schema if you have bigger pipelines, and it seems to offer less customization. If we users are going to be messing with resolver pipelines, we have to dive into the pipelineFunctions folder (or whatever) anyway, so the config might as well go there rather than being built in as directives like prop 2.

To expand on this more, I'm making pipeline resolvers for queries, deletes, updates, and creates. I don't want to have 10+ lines in my schema per model just for pipeline functions.

idanlo commented 5 years ago

Can we have any updates? It has been 5 months and this is a much-needed feature for any app that is not a todo list.

andrewbtp commented 5 years ago

I have a Python script handling this that I can stick on GitHub if ya want @idanlo

idanlo commented 5 years ago

@andrewbtp sure, that'll be great

andrewbtp commented 5 years ago

@idanlo https://github.com/andrewbtp/AppSyncPipelineResolverDeployer

sacrampton commented 5 years ago

@mikeparisstuff - can we get an update on this?

The @function directive lets you define Lambda functions in the amplify CLI, but there appears to be no way to define pipeline resolvers in the CLI. It is also unclear how to convert the default resolvers into pipeline resolvers within the CLI.

There just appears to be a gaping hole in how to support pipeline resolvers within the CLI, which means that we can't implement these except via the console, which is not compliant with AWS Well-Architected requirements.

ambientlight commented 5 years ago

@sacrampton:

  1. If you chain two @function directives together, the graphql schema will compile to a pipeline resolver.
  2. Alternatively, custom resolvers (https://aws-amplify.github.io/docs/cli-toolchain/graphql#custom-resolvers) are a way to define your own resolvers with a data source and a request/response VTL template pair. (Personally, this is a pretty practical way to use AppSync, where Amplify's job is just provisioning and deploying your CF stacks.)

idanlo commented 5 years ago

@ambientlight that's right, you can use the @function directive to chain functions, but that only works for lambda functions, which means for every new @function you have to create a lambda function. The good thing about pipeline resolvers is that you can use AppSync functions with the VTL language, configuring the code for the request and response that way. Also, making requests to DynamoDB is much easier with VTL resolvers because they were practically built for that reason.

sacrampton commented 5 years ago

Thanks for your comments @idanlo / @ambientlight. You are correct - I want to create pipeline resolvers without Lambda, just with VTL, as that is a perfect fit for my need.

Let's say I have a type called Asset; then a default mutation "createAsset" is created. If I want to execute another resolver after this "createAsset", then on the console this mutation is first converted to a pipeline resolver and the next resolvers are identified. I can't see any way to "convert" the default resolver into a pipeline function, then add additional functions/resolvers to allow the pipeline to be created through the CLI.

The way it looks at the moment (and I hope I'm missing something) is that the only way to take the default resolvers and add additional functions to them in a pipeline is through the console - not through the CLI.

I can't find much in the way of references/examples for any of this stuff either.

ambientlight commented 5 years ago

@idanlo, @sacrampton: I personally don't mind writing custom VTL and CF templates. It always gives you the flexibility to tweak or hack it in any way, without any need to rely on Amplify transformer abstractions.

The relevant documentation is at custom resolver; examples are provided there. In short, every api category graphql resource folder has stacks and resolvers folders. In stacks you would drop a custom CloudFormation template referencing the VTL templates you put in the resolvers folder. For a pipeline resolver, in your CustomResolvers.json take a look at PipelineConfig on AWS::AppSync::Resolver and at AWS::AppSync::FunctionConfiguration. One example: building-aws-appsync-pipeline-resolvers-with-aws-cloudformation (amplify should be fine if you throw a .yaml CFT in the stacks folder).

idanlo commented 5 years ago

@ambientlight @sacrampton I am currently using the python script that @andrewbtp posted here, which is very helpful. I run the script every time I push to AppSync using the Amplify CLI (because pushing currently overwrites pipelines), but you do have to create the functions in the AppSync console manually.

ambientlight commented 5 years ago

@idanlo: Hm, strange. You mean you write custom resolvers with custom CloudFormation, and when it compiles the model it removes resolvers of type pipeline from the generated stacks? Let me double check. By functions I mean AppSync functions, not @function lambdas.

idanlo commented 5 years ago

@ambientlight I am creating the functions through the AppSync console, where it asks me what resource it needs to access and lets me write request and response resolvers using VTL. After that, using the python script that @andrewbtp posted, I can use those AppSync functions in a pipeline resolver.

I recommend that you take a look at his python script; it was extremely helpful for me.

nateiler commented 5 years ago

I've been able to incorporate pipeline functions using only Velocity templates without needing to use the console at all. It required defining some additional CF and creating some additional resolvers, but I believe some updates have been made to the Amplify CLI that no longer require the python resolver deployer script.

I'm sure I can find a few example snippets in our project; just want to make sure we're on the same page first.

ambientlight commented 5 years ago

@nateiler: yes, that's what I tried describing above

@idanlo: Correct, this behavior is expected: you are 'drifting' the stacks which are managed by CloudFormation. I am not entirely sure what will happen if you keep manually tweaking AppSync in the console when it is managed by CF, but instead of doing that by hand you can do the same thing in CloudFormation (custom resolvers in amplify); it will generate a nested stack for the things you define under /stacks in your api category resource folder.

I would personally discourage tweaking absolutely anything in the console that CloudFormation manages for you via amplify. Some resources in CF support drift detection, so you can go to the CloudFormation console and see which properties of managed resources have 'drifted' from the CF-defined ones.

nateiler commented 5 years ago

Here is an example from our project. A couple of assumptions here: 1) some level of comfort adding custom CF stacks, and 2) some level of comfort manipulating resolver templates.

In the following example I'm using a pipeline resolver to create an entity; in this case it's a company, but we have other 'entity' types (contact, event, etc).

In a custom CF Stack, add a couple resources:

"CreateEntityFunction": {
      "Type": "AWS::AppSync::FunctionConfiguration",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "Name": "CreateEntityFunction",
        "DataSourceName": "EntityTable",
        "FunctionVersion": "2018-05-29",
        "RequestMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/${ResolverFileName}",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              },
              "ResolverFileName": {
                "Fn::Join": [
                  ".",
                  ["Entity", "Mutation", "Function", "Create", "req", "vtl"]
                ]
              }
            }
          ]
        },
        "ResponseMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/${ResolverFileName}",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              },
              "ResolverFileName": {
                "Fn::Join": [
                  ".",
                  ["Entity", "Mutation", "Function", "item", "res", "vtl"]
                ]
              }
            }
          ]
        }
      }
    },    
    "CreateCompanyEntityResolver": {
      "Type": "AWS::AppSync::Resolver",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "FieldName": "createCompanyEntity",
        "TypeName": "Mutation",
        "Kind": "PIPELINE",
        "PipelineConfig": {
          "Functions": [
            {
              "Fn::GetAtt": ["CreateEntityFunction", "FunctionId"]
            }
          ]
        },
        "RequestMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/${ResolverFileName}",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              },
              "ResolverFileName": {
                "Fn::Join": [
                  ".",
                  ["Entity", "Mutation", "Pipeline", "Company", "req", "vtl"]
                ]
              }
            }
          ]
        },
        "ResponseMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/${ResolverFileName}",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              },
              "ResolverFileName": {
                "Fn::Join": [
                  ".",
                  ["Entity", "Mutation", "Pipeline", "item", "res", "vtl"]
                ]
              }
            }
          ]
        }
      },
      "DependsOn": ["CreateEntityFunction"]
    }

CreateEntityFunction is the re-usable pipeline function (that we'll use for other types of entities) and CreateCompanyEntityResolver is the mutation resolver (which has been converted from a standard resolver to a pipeline resolver).

The request and response templates defined on CreateCompanyEntityResolver run before and after the pipeline functions. So in our case we're simply stashing the entity type.

Entity.Mutation.Pipeline.Company.req.vtl (BEFORE)

$util.qr($ctx.stash.put("type", "CompanyEntity"))
{}

For the function request we create a new DynamoDB item (logic not relevant to this example removed): Entity.Mutation.Function.Create.req.vtl

## Get the entity type from stash
#set($type = $ctx.stash.get("type"))
#if ($util.isNullOrBlank($type))
    $util.error("Invalid entity type.")
#end

## REMOVED APP SPECIFIC LOGIC

{
  "version": "2017-02-28",
  "operation": "PutItem",
  "key": #if( $modelObjectKey ) $util.toJson($modelObjectKey) #else {
  "id":   $util.dynamodb.toDynamoDBJson($util.defaultIfNullOrBlank($ctx.args.input.id, $util.autoId()))
} #end,
  "attributeValues": $util.dynamodb.toMapValuesJson($context.args.input),
  "condition": {
      "expression": "attribute_not_exists(#id)",
      "expressionNames": {
          "#id": "id"
    }
  }
}

For the function response, we look for errors and modify/return the result (logic not relevant to this example removed): Entity.Mutation.Function.item.res.vtl

#if($ctx.error)
    $util.error($ctx.error.message, $ctx.error.type)
#end

## REMOVED APP SPECIFIC LOGIC

$util.toJson($ctx.result)

Finally, output the result. Entity.Mutation.Pipeline.item.res.vtl (AFTER)

$util.toJson($context.result)

Hopefully this is helpful. The naming convention of our templates does not follow the default amplify naming convention because we found it hard to group and keep track of them as the project grew.

andrewbtp commented 5 years ago

The Python script is a little easier to use from scratch, but that CFT is definitely the correct way to do this and is safer for production systems.

For anybody using the Python script, eventually I'll probably make it spit out a CFT (or many individual CFTs, need to look into it more) and then just use that going forward.

Ricardo1980 commented 4 years ago

Hello @nateiler, regarding your latest post: is that how I have to implement pipeline resolvers manually? (Given that the amplify cli cannot do it and I don't see docs about that.) Thanks.

martinjuhasz commented 4 years ago

If I understand @nateiler's example correctly, this works if I have a completely custom resolver, right? Am I able to add custom pipeline functions to an auto-generated resolver?

ryanhollander commented 4 years ago

@martinjuhasz @Ricardo1980 That is correct. Using the "escape hatch" you can define custom functions, data sources, and resolvers within a JSON template (typically ...api/%apiname%/stacks/CustomResolvers.json). You can place the function and resolver VTL in .../resolvers/... and reference them in the JSON file. You can create pipeline resolvers this way too. While you can overwrite resolvers that are auto-generated, you can't turn them into pipeline resolvers; those have to be defined as custom resolvers. In your graphql you can use @model to specify which queries/mutations/subscriptions you want the auto-gen to create. What I often do is specify an auto-created resolver, use "amplify api gql-compile" to auto-generate the templates, then copy those to use in my pipeline resolver functions.

There is documentation here: https://aws-amplify.github.io/docs/cli-toolchain/graphql#overwriting-resolvers https://aws-amplify.github.io/docs/cli-toolchain/graphql#custom-resolvers

Ricardo1980 commented 4 years ago

Thanks @ryanhollander. But still, I don't have a clear idea of the files and content I have to use, and the documentation does not mention it. Can you review this request I opened 2 days ago: https://github.com/aws-amplify/amplify-cli/issues/3321 Perhaps you can show me an example of all the files I need. Thanks a lot!

iShavgula commented 4 years ago

@Ricardo1980 Let's say we have a query called doSomething which we want to back with a pipeline resolver.

First, you need to define configurations for your functions and for the pipeline resolver which uses those functions. You can add those into the Resources section of CustomResources.json, found in amplify/backend/api/YOUR_APINAME/stacks.

"FirstActionFunction": {
    "Type": "AWS::AppSync::FunctionConfiguration",
    "Properties": {
        "ApiId": {
        "Ref": "AppSyncApiId"
        },
        "Name": "FirstActionFunction",
        "DataSourceName": "YOUR_OBJECTTable",
        "FunctionVersion": "2018-05-29",
        "RequestMappingTemplateS3Location": {
        "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.firstAction.req.vtl",
            {
            "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
            },
            "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
            }
            }
        ]
        },
        "ResponseMappingTemplateS3Location": {
        "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.firstAction.res.vtl",
            {
            "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
            },
            "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
            }
            }
        ]
        }
    }
},
"SecondActionFunction": {
    "Type": "AWS::AppSync::FunctionConfiguration",
    "Properties": {
        "ApiId": {
        "Ref": "AppSyncApiId"
        },
        "Name": "SecondActionFunction",
        "DataSourceName": "ANOTHER_OBJECTTable",
        "FunctionVersion": "2018-05-29",
        "RequestMappingTemplateS3Location": {
        "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Mutation.secondAction.req.vtl",
            {
            "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
            },
            "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
            }
            }
        ]
        },
        "ResponseMappingTemplateS3Location": {
        "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Mutation.secondAction.res.vtl",
            {
            "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
            },
            "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
            }
            }
        ]
        }
    }
},

"DoSomethingPipelineResolver": {
    "Type": "AWS::AppSync::Resolver",
    "Properties": {
      "ApiId": {
        "Ref": "AppSyncApiId"
      },
      "Kind": "PIPELINE",
      "PipelineConfig": {
        "Functions": [
          {
            "Fn::GetAtt": ["FirstActionFunction", "FunctionId"]
          },
          {
            "Fn::GetAtt": ["SecondActionFunction", "FunctionId"]
          }
        ]
      },
      "TypeName": "Query",
      "FieldName": "doSomething", 
      "RequestMappingTemplateS3Location": {
        "Fn::Sub": [
          "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.doSomethingPipeline.req.vtl",
          {
            "S3DeploymentBucket": {
              "Ref": "S3DeploymentBucket"
            },
            "S3DeploymentRootKey": {
              "Ref": "S3DeploymentRootKey"
            }
          }
        ]
      },
      "ResponseMappingTemplateS3Location": {
        "Fn::Sub": [
          "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.doSomethingPipeline.res.vtl",
          {
            "S3DeploymentBucket": {
              "Ref": "S3DeploymentBucket"
            },
            "S3DeploymentRootKey": {
              "Ref": "S3DeploymentRootKey"
            }
          }
        ]
      }
    },
    "DependsOn": ["FirstActionFunction", "SecondActionFunction"]
}

As seen above, you will have to configure your BEFORE and AFTER mapping templates as well:
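
For example, minimal BEFORE and AFTER templates in the style of the earlier example:

## Query.doSomethingPipeline.req.vtl (BEFORE)
{}

## Query.doSomethingPipeline.res.vtl (AFTER)
$util.toJson($ctx.result)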

All mentioned .vtl files should be added under amplify/backend/api/YOUR_APINAME/resolvers.

Hope this helps, good luck.

Ricardo1980 commented 4 years ago

@iShavgula Thanks a lot, it is working!

BTW, is "DependsOn": ["FirstActionFunction", "SecondActionFunction"] required? If I remove that, it is working. Reading https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-dependson.html it seems that perhaps it is not needed due to an implicit dependency. Don't know. Thanks.

bmilesp commented 4 years ago

I also agree with @ambientlight regarding changing the @before and @after directives to a more chainable directive (quoted from @ambientlight above):

type Message
   @function(mutation: create, name: "Authorize", datasource: "AuditTable")
   @model(queries: null)
   @function(mutation: create, name: "AuditMutation", datasource: "AuditTable")
{
    id: ID!
    content: String
}

The pipeline chain that includes models could become very robust.

davekiss commented 4 years ago

This RFC was opened over a year ago. I understand this is likely not a trivial undertaking, but are there any updates on roadmap/progress/implementation from the AWS perspective?

ambroserb3 commented 4 years ago

+1

etaylor23 commented 4 years ago

+1 - hope this makes it soon.

mrgrue commented 4 years ago

+1, especially the resolver function chaining.

mrgrue commented 4 years ago

I noticed that AppSync now supports resolvers as Lambdas: https://aws.amazon.com/blogs/mobile/appsync-direct-lambda/

Exposing this in Amplify would mean that we wouldn't have to worry about VTL function chaining.

Is that a possibility that would fulfill this RFC?

mmccall10 commented 4 years ago

I like what @mrgrue suggested. I'd be more than happy to use lambda resolvers for pipeline resolvers today. In fact, landing a @before directive with a function option would satisfy 99% of what I would use pipeline resolvers for.

@before(mutation: create, function: "AuthorizeUserIsChatMember")

benmj commented 3 years ago

I like what @mrgrue suggested. I'd be more than happy to use lambda resolvers for pipeline resolvers today. In fact, landing a @before directive with a function option would satisfy 99% of what I would use pipeline resolvers for.

Agreed @mmccall10 - this would be perfect and satisfy many use cases.

@mikeparisstuff would you be able to give a rough idea of how this stacks up amongst the priorities of the Amplify team?

iyz91 commented 3 years ago

@mikeparisstuff @renebrandel Is there any update on this? Approaching 2 years now. In your opinion, would the amplify-cli play nice with additional CF/SAM/CDK implementations to get around these issues, or would they clash?

ronaldocpontes commented 3 years ago

@mikeparisstuff @renebrandel this would massively help with our multi-tenancy use case, and it is currently a blocker for our adoption of Amplify as our main stack. Is anyone still working on this?