mattiLeBlanc opened this issue 5 years ago
@mattiLeBlanc Are you referring to a Lambda environment variable or a parameter? Currently in AppSync you can pass the table name as a field in the schema; otherwise you'll need to specify the table name in the request mapping template. This seems similar to the following feature request: aws-amplify/amplify-category-api#439. I'll review this issue with the team as well.
My Function resolver (AppSync pipeline) uses a BatchPutItem:
#set($postsdata = [])
#foreach($id in ${ctx.args.groups})
#set($item = {
"pk": $id,
"sk": "POST:POST_ID=$ctx.stash.postId",
"type": "POST_IN_GROUP",
"title": $ctx.args.title
})
$util.qr($postsdata.add($util.dynamodb.toMapValues($item)))
#end
{
"version" : "2018-05-29",
"operation" : "BatchPutItem",
"tables" : {
"coralconsole_$ctx.stash.env": $utils.toJson($postsdata)
}
}
and as you can see I am using a stashed env variable in the table name in an attempt to make this work.
However, in CloudFormation we already have an ENV variable available, so it might be possible to expose it in the $ctx object so that we don't have to call a Lambda function in a pipeline to be able to specify the unique environment table name.
I also have the same issue. On top of that, how do I add the table name when I am testing the API locally?
👍 Also having issues with Batch*Item operations; the table name differs per environment. The resolver pipeline has an association with the data source, so why is the table name inherent for Query, PutItem, and GetItem, but required for Batch*Item operations? I'd rather not shell out to a Lambda to evaluate it.
The workaround of a pipeline where you use a Lambda to import environment variables into the stash will impact performance in bigger AppSync APIs.
With BatchGetItem you really see that AppSync is still in its early stages.
I really hope that AWS will add environment variables for mapping resolvers. Adding the option to get the type name and field name in the resolver would also be great.
And remove the option that forces you to set the table name again even when the data source already has that table attached; that is just a straight-up bug in AWS AppSync.
One way we resolved this is by using the AWS CDK to provision our cloud resources. When we build our dist, we read the resolver templates and inject the table name. Then that is deployed. Works pretty well.
@mattiLeBlanc Hmm, that could be a good workaround. Could you share some of the code you made with AWS CDK to accomplish that?
I’ll see if I can make an extract of our setup. To be continued
@mattiLeBlanc ok, waiting patiently on your response :)
Hi Rob,
Well, it is a bit hard to give you our full CDK implementation because we haven't open-sourced it (yet); it's still in development.
But the bit where we do the injection is where we define the template for a resolver:
/**
* Add a Resolver to the API
*/
public addResolver(config: ResolverConfig) {
const options: any = {
apiId: this.api.attrApiId,
typeName: config.type,
fieldName: config.name,
requestMappingTemplate: this.addEnv(ResolverService.Instance.resolvers[ config.type ][ `${config.template}-req` ]),
responseMappingTemplate: ResolverService.Instance.resolvers[ config.type ][ `${config.template}-res` ]
};
if (config.kind === ResolverKind.PIPELINE && config.pipelineFunctions && Array.isArray(config.pipelineFunctions)) {
options.kind = ResolverKind.PIPELINE;
options.pipelineConfig = {
functions: []
};
config.pipelineFunctions.forEach(name => {
options.pipelineConfig.functions.push(this.pipelineFunctions[ `${name}` ].attrFunctionId);
});
} else {
options.dataSourceName = config.dataSourceName;
}
return new CfnResolver(this, `Resolver_${config.name}`, options);
}
The important bit here is this.addEnv, which is used for the requestMappingTemplate property. This function is nothing more than a concat function:
protected addEnv(template: string) {
return `#set($env=${JSON.stringify(this.resolverEnvironment)})\n${template}`;
}
The resolverEnvironment is a property of an AppSync construct (class) that creates an AWS resource using Constructs (check the CDK example for TypeScript). So when you deploy your API for an environment (local, dev, staging, etc.) it will automatically inject the $env variable into your template.
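To make the prepending concrete, here is a minimal TypeScript sketch of the same idea. The resolverEnvironment contents are hypothetical example values, not the actual ones from that stack:

```typescript
// Prepend a VTL "#set" line so $env is available inside the mapping template.
// resolverEnvironment stands in for whatever per-stage config you track.
const resolverEnvironment = { env: "dev", region: "ap-southeast-2" };

function addEnv(template: string): string {
  return `#set($env=${JSON.stringify(resolverEnvironment)})\n${template}`;
}

const template = addEnv('{ "version": "2018-05-29", "operation": "GetItem" }');
// First line of the result: #set($env={"env":"dev","region":"ap-southeast-2"})
// so the template can reference e.g. "MyTable-${env.env}" at runtime.
```

The original template is untouched; only a single `#set` line is concatenated in front, which VTL evaluates before the rest of the template.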
@mattiLeBlanc thanks for the example, this will help.
I hope it does. We found implementing the CDK pretty cumbersome at the start, especially with a bigger project with 3 stacks and one root stack. But I hope you will figure it out. Otherwise, just ask me in this thread.
@mattiLeBlanc I was unable to find resolverEnvironment anywhere in https://github.com/aws-samples/aws-cdk-examples, or any of the docs, are you referring to another git repo/cdk constructs example code?
Adding an update here that as of now AppSync does not support adding environment variables into resolver functions. We are looking at other ways we can address this. We also welcome any PRs or discussions on potential solutions on this.
+1 for this feature
Recently was using Amplify+AppSync and had to create custom resolvers for BatchPutItem
. Works well when deployed, but because of the table name being different when I develop locally with amplify mock api
the resolver becomes essentially useless.
@mattiLeBlanc I was unable to find resolverEnvironment anywhere in https://github.com/aws-samples/aws-cdk-examples, or any of the docs, are you referring to another git repo/cdk constructs example code?
Sorry for the late reply:
resolverEnvironment is something we added to our own stack, so it is not a standard property you would find like region or account.
We get our environment from process.env.ENV, and we set it in Bitbucket (deploy variables) or in our local terminal environment variables.
Does that make sense?
+1 for that feature. Any news on that?
Thx and all the best!
Yes, already supported, through substitutions.
Could you please provide some more details? An example would also be great. Really appreciate your support!
I'm also vouching for this. Having to manage these table names in resolvers is a real pain. People don't think about it and push resolver code to git with the table names changed all the time.
There should be an easy way to get the table names in the given environment; that would be a life saver.
And don't tell me to just ask people not to push these modified files; you know people, they forget as soon as you let them go. That's human nature...
+1 :+1:
Pretty much a deal-breaker feature that's missing at the moment.
Hey, any news on this? Has anyone been successful putting the APPSYNC_ID and the ENV into the VTL params?
Many thanks!
My very easy and very unsophisticated workaround is to create multiple fields for each env with hard-coded values.
Hey, any news on this? Has anyone been successful putting the APPSYNC_ID and the ENV into the VTL params?
Many thanks!
@jonperryxlm came up with a fix in aws-amplify/amplify-cli#1946 where you can specify an additional function that feeds the API ID and env into the stash, which worked for me. The relevant bits 👇 (it requires setting up a pipeline resolver, though).
In CustomResources.json
"addEnvVariablesToStash": {
"Type": "AWS::AppSync::FunctionConfiguration",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"DataSourceName": "NONE",
"Description": "Sets $ctx.stash.env to the Amplify environment and $ctx.stash.apiId to the Amplify API ID",
"FunctionVersion": "2018-05-29",
"Name": "addEnvVariablesToStash",
"RequestMappingTemplate": "{\n \"version\": \"2017-02-28\",\n \"payload\": {}\n }",
"ResponseMappingTemplate": {
"Fn::Join": [
"",
[
"$util.qr($context.stash.put(\"env\", \"",
{ "Ref" : "env" },
"\"))\n$util.qr($context.stash.put(\"apiId\", \"",
{ "Ref": "AppSyncApiId" },
"\"))\n$util.toJson($ctx.prev.result)"
]
]
}
}
},
And then in your resolver that needs it:
{
"version" : "2018-05-29",
"operation" : "BatchGetItem",
"tables" : {
"MyTablename-${ctx.stash.apiId}-${ctx.stash.env}": {
"keys": $util.toJson($ids),
"consistentRead": true
}
}
}
I don't think it's good practice to use stage variables as part of your table naming convention. Rather, it is recommended to use separate AWS accounts for all stages with AWS Organizations.
Thanks for the workaround @wai-chuen. I want to add a +1 for this functionality. We also have tables with the API ID and env name in their name.
LATER EDIT: $context needs to be replaced with $ctx.
@aws As someone said before, for Batch*Item operations you need the table name. This means that in order to reuse a VTL template for BatchDeleteItem, for example, you need to have the table name available in $ctx.stash or somewhere. Currently I am creating separate pipeline resolvers and separate templates for each entity type, but if I had the table name available in the context I could use only one template.
This solution is ridiculous. We still don't have support for passing these variables directly to a resolver?
It would be nice to have this feature
A nice-to-have feature ×2. Meanwhile, another workaround is to create a Lambda function that performs the batch operation and call it from the API.
Without this feature it makes it immensely complicated and not scalable to build batch operations or any other custom resolvers in AppSync. The tables need to be hardcoded which is an absolute no-go.
I put this into the Resources object in stacks/CustomResources.json, as I wanted to use a pipeline resolver:
"Resources": {
"EmptyResource": {
"Type": "Custom::EmptyResource",
"Condition": "AlwaysFalse"
},
"AddEnvVariablesToStash": {
"Type": "AWS::AppSync::FunctionConfiguration",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"DataSourceName": "NONE",
"Description": "Sets $ctx.stash.env to the Amplify environment and $ctx.stash.apiId to the Amplify API ID",
"FunctionVersion": "2018-05-29",
"Name": "AddEnvVariablesToStash",
"RequestMappingTemplate": "{\n \"version\": \"2017-02-28\",\n \"payload\": {}\n }",
"ResponseMappingTemplate": {
"Fn::Join": [
"",
[
"$util.qr($ctx.stash.put(\"env\", \"",
{
"Ref": "env"
},
"\"))\n$util.qr($ctx.stash.put(\"apiId\", \"",
{
"Ref": "AppSyncApiId"
},
"\"))\n$util.toJson($ctx.prev.result)"
]
]
}
}
},
"FunctionQueryBatchFetchTodo": {
"Type": "AWS::AppSync::FunctionConfiguration",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"DataSourceName": "TodoTable",
"FunctionVersion": "2018-05-29",
"Name": "FunctionQueryBatchFetchTodo",
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.batchFetchTodo.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
"ResponseMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.batchFetchTodo.res.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
}
}
},
"PipelineQueryBatchResolver": {
"Type": "AWS::AppSync::Resolver",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"Kind": "PIPELINE",
"PipelineConfig": {
"Functions": [
{
"Fn::GetAtt": [
"AddEnvVariablesToStash",
"FunctionId"
]
},
{
"Fn::GetAtt": [
"FunctionQueryBatchFetchTodo",
"FunctionId"
]
}
],
"TypeName": "Query",
"FieldName": "batchFetchTodo",
"RequestMappingTemplate": "{}",
"ResponseMappingTemplate": "$util.toJson($ctx.result)"
}
}
},
"NONE": {
"Type": "AWS::AppSync::DataSource",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"Name": "NONE",
"Type": "NONE"
}
}
},
I am using "aws-amplify": "^4.3.12" and "aws-amplify-react-native": "^6.0.2". Running amplify -version in terminal prints 5.1.0.
I am still getting errors. Whether I remove the 'NONE' definition or keep it, I still receive this error:
No data source found named NONE (Service: AmazonDeepdish; Status Code: 404; Error Code: NotFoundException; Request ID: 4497b926-78ae-464a-bad0-f98a865baffb; Proxy: null)
Would be really nice if somebody from the AWS Amplify team wrote at least a simple blog post about it (after almost 3 years, instead of closing tickets/bug reports).
This is a schema that I used:
type Todo @model {
id: ID!
name: String!
description: String
priority: String
}
type Query {
batchFetchTodo(ids: [ID]): [Todo]
}
@majirosstefan I haven't done anything with this for over a year, so I'm surprised (and a little annoyed) that this is still an issue people are struggling with, with no help from the AWS Amplify team...
To use the solution from aws-amplify/amplify-cli#1946, the first thing I would check is that you actually have a data source of type "NONE" in the AppSync UI (AWS AppSync > [YOUR API] > Data Sources). You can call it anything, but the type has to be NONE and the name you choose is the value to reference in the "DataSourceName" field of the "AddEnvVariablesToStash" object. I just so happened to call my data source of type NONE... NONE.
From memory (and I apologise if I'm remembering incorrectly), I think you need to create the NONE data source in the AWS AppSync UI (AWS AppSync > [YOUR API] > Data Sources > Create data source). You might be able to do it programmatically, but I have a feeling that at the time I read something about creating it in the UI.
I hope that helps. I know how frustrating this issue is. It's probably all I can do to help unfortunately because it's been so long and my brain is shielding me from the trauma.
@jonperryxlm Thanks for the reply and suggestions (it helped).
I figured it out just a few minutes ago (I also needed to re-deploy the API, because it seems the AppSync and local stacks got out of sync and it was throwing quite strange errors during deployment).
I am currently writing that missing blog post so I won't forget it, as my brain works similarly to yours (I mean the trauma-shielding thing).
I will post the link in this comment later.
Link: https://stefan-majiros.com/blog/custom-graphql-batchgetitem-resolver-in-aws-amplify-for-appsync/
NOTE: look into adding env details into stash per aws-amplify/amplify-category-api#408
Is there any update on this?
I'd like to add another issue where support for environment variables is missing: IAM authorization for AppSync requires adding all allowed roles or usernames to custom-roles.json:
{
"adminRoleNames": ["my-iam-role-dev", "my-iam-role-prod"]
}
These roles will be copied and hardcoded into the generated auth resolvers:
#if( $util.authType() == "IAM Authorization" )
#set( $adminRoles = ["my-iam-role-dev", "my-iam-role-prod"] )
#foreach( $adminRole in $adminRoles )
#if( $ctx.identity.userArn.contains($adminRole) && $ctx.identity.userArn != $ctx.stash.authRole && $ctx.identity.userArn != $ctx.stash.unauthRole )
#return($util.toJson({}))
#end
#end
#if( ($ctx.identity.userArn == $ctx.stash.authRole) || ($ctx.identity.cognitoIdentityPoolId == "eu-west-1:..." && $ctx.identity.cognitoIdentityAuthType == "authenticated") )
#set( $isAuthorized = true )
#end
#end
The IAM roles contain the environment name dev or prod, and we currently have no possibility to replace this value with the correct Amplify env. It would be good to have support for the ${env} syntax that is already supported for function resolvers: https://docs.amplify.aws/cli/graphql/custom-business-logic/#reference-amplify-environment-name
The alternative presented in this issue, a pipeline resolver that adds the env to ctx.stash.env and then references it in the IAM role as ${ctx.stash.env}, might work, but requires overwriting every single resolver.
Any updates on this? This is a serious limitation. The whole point of Amplify is to simplify app development. I have a custom resolver for TransactWriteItems that I'll have to maintain manually until there's a fix. This issue was opened over 3 years ago. Please provide a remedy 😞
Not sure if this helps anyone, but I needed to know the full table names in the deployed environment and solved it by using override.ts to create a map of all model names to table names and inserting that into all of my resolvers. It's not elegant and far from optimal, but it did unblock me from having to manually update every resolver every time I pushed new changes.
The way this looked was something like:
$util.qr($ctx.stash.put("tableNames", ${JSON.stringify(tableNameMap)}))
I split the requestMappingTemplate into an array of lines, spliced the line above into that array, and then joined the array back into a string: models[modelName].resolvers[resolverName].requestMappingTemplate = newRequestMappingTemplate;
The end result is I could do something like "table": $ctx.stash.tableNames.{tableName} for all my tables in my TransactWriteItems operations.
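For anyone attempting the same trick, here is a hedged TypeScript sketch of that kind of template injection. The helper name, the table map, and the example template are illustrative; this is not the actual Amplify override API, just the string manipulation described above:

```typescript
// Prepend a stash line to an existing VTL request mapping template so that
// $ctx.stash.tableNames is populated before the rest of the template runs.
function injectTableNames(
  requestMappingTemplate: string,
  tableNameMap: Record<string, string>
): string {
  const stashLine = `$util.qr($ctx.stash.put("tableNames", ${JSON.stringify(tableNameMap)}))`;
  const lines = requestMappingTemplate.split("\n");
  lines.unshift(stashLine); // splice the stash line in at the top
  return lines.join("\n");  // join the array back into a single template string
}

// Example values for the map and template (hypothetical names):
const newTemplate = injectTableNames(
  '{ "version": "2018-05-29", "operation": "TransactWriteItems" }',
  { Todo: "Todo-abc123-dev" }
);
```

In an override.ts you would then assign the result back to the resolver's requestMappingTemplate, as the comment above describes.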
I would also much appreciate the feature to pass environment variables into VTL, just as with Lambda. At the moment I'm stuck. I'm using CDK for the build. The only possibility I see is using Lambda, which is slower.
In my case I'm building an AppSync JS Function connected to a Http data source which in this case is a Step Function. We're using CDK and we have Sandbox, Dev, and Prod accounts. We need to pass a different Step Function ARN to the JS code for each one and currently we have to inline the code as a string to inject a different variable. It looks messy, it's error prone and not easily tested. It would be good to be able to pass env variables to the JS function like we do with Lambda functions.
Another use case is updating an OpenSearch index (create a new index -> reindex documents) and moving traffic to the new index without downtime. Without the ability to use environment variables, we need to redeploy. EDIT: use an alias instead.
Another use case is when using an HTTP datasource, like for publishing SNS messages, in which case the AppSync resolver or function needs the SNS Topic ARN.
+1. The inline code option seems to be about the only option, and it's rather gross. Env vars or some kind of build-time config, please!
Adding a Terraform workaround for those who don't want to go down the pipeline resolver route.
Defining a UNIT JS resolver in Terraform uses the code argument to pass the file that contains the resolver logic. Using the templatefile function, instead of the documented file function, allows for template syntax and string substitution.
So the request function can look like:
export function request(ctx) {
  return {
    operation: "BatchGetItem",
    tables: {
      "${table_name}": {
        keys: ctx.args.id.map((id) => util.dynamodb.toMapValues({ id })),
        consistentRead: true,
      },
    },
  };
}
And the Terraform resource can use the resource path to dynamically set the table name (e.g. aws_dynamodb_table.table.name):
code = templatefile("code-directory", {table_name = aws_dynamodb_table.table.name })
AppSync just released support for Environment Variables: https://docs.aws.amazon.com/appsync/latest/devguide/environmental-variables.html
The feature is live and the CloudFormation docs will be updated shortly.
So, how do you use it in a resource.ts file in Amplify data? How do you pass the table name to the JavaScript custom resolver?
File resource.ts:
Item: a
.model({
name: a.string(),
})
.authorization((allow) => [allow.guest(), allow.authenticated()]),
itemList: a
.query()
.arguments({
items: a.string().array(),
/* should I pass an extra argument in here? How do I get the table name? */
})
.returns(a.ref("Item").array())
.handler(
a.handler.custom({
dataSource: a.ref("Item"),
entry: "./item-list.js",
})
)
.authorization((allow) => [allow.authenticated()]),
File item-list.js:
export function request(ctx) {
return {
operation: "BatchGetItem",
tables: {
TABLE_NAME: { /* How do I get the right name in here? */
keys: ctx.args.items.map((id) => util.dynamodb.toMapValues({ id })),
consistentRead: true,
},
},
};
}
export function response(ctx) {
if (ctx.error) {
util.error(ctx.error.message, ctx.error.type);
}
return ctx.result.data.TABLE_NAME;
}
This is so frustrating. Why is it so complicated to do a BatchDeleteItem using Amplify Gen 2? How do I get the table name if I already set the dataSource? Please, someone help.
Is this resolver for only one table? If so, why don't you hardcode it in there? If the name differs per environment you can do an if/else to set the table name.
What I am doing is using CDK, and when I create my resolvers I inject my table names at the top of the resolver file so they are available as variables in the resolver at runtime and I can reference them. Not sure if it is as easy if you use pregenerated Amplify code.
Yes, but when you don't use CDK, there is no way to know the table name when using Amplify...
A simple workaround for Amplify Gen 2 users:
File backend.ts:
const backend = defineBackend({
auth,
data,
storage,
});
backend.data.resources.cfnResources.cfnGraphqlApi.addPropertyOverride(
"EnvironmentVariables",
{
API_ID: backend.data.apiId
}
);
Then you can use ctx.env.API_ID inside your resolver function:
export function request(ctx) {
return {
operation: "BatchPutItem",
tables: {
[`YourTableName-${ctx.env.API_ID}-NONE`]: [item1, item2, ....]
},
};
}
Is your feature request related to a problem? Please describe. When creating a BatchPutItem template, I have to provide the table name for the batch operation. Since I create my table via CloudFormation with an ENV value attached, I cannot use BatchPutItem because I don't have access to the current environment value in the resolver template.
A workaround I am using right now is to first call a Lambda in a pipeline resolver, passing the environment value from the first function to the second, which does the BatchPutItem. However, this is kind of unnecessary and requires an extra Lambda call, while the ENV value should be available at runtime. It looks like it is just not exposed via $ctx.
Describe the solution you'd like: expose the environment value in the $ctx object.