aws-amplify / amplify-category-api

The AWS Amplify CLI is a toolchain for simplifying serverless web and mobile development. This plugin provides functionality for the API category, allowing for the creation and management of GraphQL- and REST-based backends for your Amplify project.
https://docs.amplify.aws/
Apache License 2.0

Is there a way to specify the DynamoDB table name when creating an API resource using Amplify? #439

Open djkingh01 opened 5 years ago

djkingh01 commented 5 years ago

Note: If your question is regarding the AWS Amplify Console service, please log it in the official AWS Amplify Console forum

Which Category is your question related to? amplify

What AWS Services are you utilizing? AppSync

Provide additional details e.g. code snippets

djkingh01 commented 5 years ago

Currently, whenever I create resources using Amplify, it appends a long random string to the resource names. How can this be prevented?

UnleashedMind commented 5 years ago

You can specify the resource names that are prompted during the execution of the add command. Others are auto-generated for you, and you are not advised to change them because they may be referenced by other resources. The CLI uses long random strings to prevent accidental name clashes.

djkingh01 commented 5 years ago

In my understanding, the resources that are prompted for during add command execution are the project name and env. This has little to do with the generated table names, so it does not solve my problem. Generated table names are created from the type names in the GraphQL schema. We have a use case where we use the same table to store multiple types. How can I achieve that using Amplify?

Additionally, when we deploy to production we want to control our table names.

mikeparisstuff commented 5 years ago

@djkingh01 Currently the @model directive expects to contain a single data type; however, we are looking at generalizing this to allow overloaded indexes and multiple types per table, but there is no set date for when this will be available. That being said, you are able to write your own resolvers that target your own data sources within the CLI using custom stacks.

The process to do this would be to

  1. Create a DynamoDB Table resource within a custom CloudFormation stack.
  2. Create a scoped-down IAM role that has access to your table and its indexes, as well as a trust policy for appsync.amazonaws.com.
  3. Create an AppSync Datasource resource that points to your IAM role and DynamoDB table.
  4. Add resolvers that target your Datasource. These resolvers can return whatever type of data is stored in your DynamoDB table.

See https://aws-amplify.github.io/docs/cli/graphql#api-category-project-structure for more info.
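
A rough CloudFormation sketch of those four steps might look like the following (YAML for brevity; every name here, from TodoTable to the AppSyncApiId parameter, is illustrative rather than Amplify-generated, and the mapping templates are placeholders):

```yaml
Parameters:
  AppSyncApiId:
    Type: String
    Description: Id of the AppSync API these resources attach to.

Resources:
  # 1. The DynamoDB table, with an explicit, stable name.
  TodoTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: MyTodoTable
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - { AttributeName: pk, AttributeType: S }
        - { AttributeName: sk, AttributeType: S }
      KeySchema:
        - { AttributeName: pk, KeyType: HASH }
        - { AttributeName: sk, KeyType: RANGE }

  # 2. A scoped-down role AppSync can assume to reach the table and its indexes.
  TodoTableRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: appsync.amazonaws.com }
            Action: sts:AssumeRole
      Policies:
        - PolicyName: TodoTableAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:GetItem
                  - dynamodb:PutItem
                  - dynamodb:UpdateItem
                  - dynamodb:DeleteItem
                  - dynamodb:Query
                  - dynamodb:Scan
                Resource:
                  - !GetAtt TodoTable.Arn
                  - !Sub "${TodoTable.Arn}/index/*"

  # 3. An AppSync data source pointing at the role and table.
  TodoDataSource:
    Type: AWS::AppSync::DataSource
    Properties:
      ApiId: !Ref AppSyncApiId
      Name: TodoTableDataSource
      Type: AMAZON_DYNAMODB
      ServiceRoleArn: !GetAtt TodoTableRole.Arn
      DynamoDBConfig:
        TableName: !Ref TodoTable
        AwsRegion: !Ref AWS::Region

  # 4. A resolver targeting the data source (templates are placeholders).
  GetTodoResolver:
    Type: AWS::AppSync::Resolver
    Properties:
      ApiId: !Ref AppSyncApiId
      TypeName: Query
      FieldName: getTodo
      DataSourceName: !GetAtt TodoDataSource.Name
      RequestMappingTemplate: "## request mapping template here ##"
      ResponseMappingTemplate: "$util.toJson($ctx.result)"
```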

djkingh01 commented 5 years ago

I have a follow-up question as I was trying out the solution you suggested. If I follow your solution, will I be able to declare entities using schema.graphql and auto-generate any code, or will I have to do everything manually?

mikeparisstuff commented 5 years ago

@djkingh01 For now, yes, you would have to implement everything yourself. We do want to support the use case where you can adopt an existing table for an @model type, but it has not yet been implemented. We have looked at a few approaches including expanding @model with new arguments as well as adding a new directive such as @table. If you are interested and want to contribute a potential design, I would be happy to review it.

P.S. We have also discussed opening up the CLI such that you can write and run your own transformers as part of the normal workflow. I'm curious how many people would want this feature?
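
For readers skimming this thread, a hypothetical shape for such a directive could look like the snippet below. To be clear, neither @table nor a table-name argument on @model exists at the time of writing; this is purely a design illustration.

```graphql
# Hypothetical syntax only; not implemented in the transformer today.
type Todo @model @table(name: "MyExistingTodoTable") {
  id: ID!
  name: String!
}
```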

djkingh01 commented 5 years ago

I believe a lot of people would be interested. My use case is that we have an existing DynamoDB table and CRUD code with a pk/sk hierarchical access pattern in a single table. Amplify would be a great way to easily migrate our backend to GraphQL and reduce the testing footprint. All we need is @sk and @datasource support in some way: @sk so that the generated resolvers use @pk/@sk in their queries, and @datasource so that I can manually define a data source or use a custom CloudFormation template, then use Amplify to define and author the schema and go live. I am sure I am not the only one with this problem. In its current form the system only works for new projects, and even then I am not sure how many teams can run production tables with randomly generated names that they cannot control.

djkingh01 commented 5 years ago

It took me a long time to get these right, so I am sharing them for others' benefit.

  1. Below is a Todo schema.graphql file that supports a primary key (pk) and a sort key (sk).
  2. The resolver mapping templates for each operation.

```graphql
type Todo {
  pk: String!
  sk: String!
  displayName: String
  notes: String
}

input TodoKey {
  pk: String!
  sk: String!
}

input CreateTodoInput {
  pk: String!
  sk: String!
  displayName: String
  notes: String
}

type ModelTodoConnection {
  items: [Todo]
  nextToken: String
}

input ModelTodoFilterInput {
  pk: ModelStringFilterInput
  sk: ModelStringFilterInput
  displayName: ModelStringFilterInput
  notes: ModelStringFilterInput
  and: [ModelTodoFilterInput]
  or: [ModelTodoFilterInput]
  not: ModelTodoFilterInput
}

input ModelBooleanFilterInput {
  ne: Boolean
  eq: Boolean
}

input ModelFloatFilterInput {
  ne: Float
  eq: Float
  le: Float
  lt: Float
  ge: Float
  gt: Float
  contains: Float
  notContains: Float
  between: [Float]
}

input ModelIDFilterInput {
  ne: ID
  eq: ID
  le: ID
  lt: ID
  ge: ID
  gt: ID
  contains: ID
  notContains: ID
  between: [ID]
  beginsWith: ID
}

input ModelIntFilterInput {
  ne: Int
  eq: Int
  le: Int
  lt: Int
  ge: Int
  gt: Int
  contains: Int
  notContains: Int
  between: [Int]
}

enum ModelSortDirection {
  ASC
  DESC
}

input ModelStringFilterInput {
  ne: String
  eq: String
  le: String
  lt: String
  ge: String
  gt: String
  contains: String
  notContains: String
  between: [String]
  beginsWith: String
}

type Mutation {
  createTodo(input: CreateTodoInput!): Todo
  updateTodo(input: UpdateTodoInput!): Todo
  deleteTodo(input: TodoKey!): Todo
}

type Query {
  getTodo(query: TodoKey!): Todo
  listTodos(query: ModelTodoFilterInput, filter: ModelTodoFilterInput, limit: Int, nextToken: String): ModelTodoConnection
}

type Subscription {
  onCreateTodo: Todo @aws_subscribe(mutations: ["createTodo"])
  onUpdateTodo: Todo @aws_subscribe(mutations: ["updateTodo"])
  onDeleteTodo: Todo @aws_subscribe(mutations: ["deleteTodo"])
}

input UpdateTodoInput {
  pk: String
  sk: String
  displayName: String
  notes: String
}
```

a. CreateTodo

```
## [Start] Prepare DynamoDB PutItem Request. **
$util.qr($context.args.input.put("createdAt", $util.time.nowISO8601()))
$util.qr($context.args.input.put("updatedAt", $util.time.nowISO8601()))
$util.qr($context.args.input.put("__typename", "Todo"))
{
  "version": "2017-02-28",
  "operation": "PutItem",
  "key": {
    "pk": $util.dynamodb.toDynamoDBJson($util.defaultIfNullOrBlank($ctx.args.input.pk, $util.autoId())),
    "sk": $util.dynamodb.toDynamoDBJson($util.defaultIfNullOrBlank($ctx.args.input.sk, $util.autoId()))
  },
  "attributeValues": $util.dynamodb.toMapValuesJson($context.args.input),
  "condition": {
    "expression": "attribute_not_exists(#pk) and attribute_not_exists(#sk)",
    "expressionNames": {
      "#pk": "pk",
      "#sk": "sk"
    }
  }
}
## [End] Prepare DynamoDB PutItem Request. **
```

b. DeleteTodo

```
#if( $authCondition )
  #set( $condition = $authCondition )
  $util.qr($condition.put("expression", "$condition.expression AND attribute_exists(#pk) AND attribute_exists(#sk)"))
  $util.qr($condition.expressionNames.put("#pk", "pk"))
  $util.qr($condition.expressionNames.put("#sk", "sk"))
#else
  #set( $condition = {
    "expression": "attribute_exists(#pk) and attribute_exists(#sk)",
    "expressionNames": {
      "#pk": "pk",
      "#sk": "sk"
    }
  } )
#end
#if( $versionedCondition )
  $util.qr($condition.put("expression", "($condition.expression) AND $versionedCondition.expression"))
  $util.qr($condition.expressionNames.putAll($versionedCondition.expressionNames))
  #set( $expressionValues = $util.defaultIfNull($condition.expressionValues, {}) )
  $util.qr($expressionValues.putAll($versionedCondition.expressionValues))
  #set( $condition.expressionValues = $expressionValues )
#end
{
  "version": "2017-02-28",
  "operation": "DeleteItem",
  "key": {
    "pk": $util.dynamodb.toDynamoDBJson($ctx.args.input.pk),
    "sk": $util.dynamodb.toDynamoDBJson($ctx.args.input.sk)
  },
  "condition": $util.toJson($condition)
}
```

c. UpdateTodo

```
#if( $authCondition && $authCondition.expression != "" )
  #set( $condition = $authCondition )
  $util.qr($condition.put("expression", "$condition.expression AND attribute_exists(#pk) AND attribute_exists(#sk)"))
  $util.qr($condition.expressionNames.put("#pk", "pk"))
  $util.qr($condition.expressionNames.put("#sk", "sk"))
#else
  #set( $condition = {
    "expression": "attribute_exists(#pk) and attribute_exists(#sk)",
    "expressionNames": {
      "#pk": "pk",
      "#sk": "sk"
    },
    "expressionValues": {}
  } )
#end

## Automatically set the updatedAt timestamp. **
$util.qr($context.args.input.put("updatedAt", $util.time.nowISO8601()))
$util.qr($context.args.input.put("__typename", "Todo"))

## Update condition if type is @versioned **
#if( $versionedCondition )
  $util.qr($condition.put("expression", "($condition.expression) AND $versionedCondition.expression"))
  $util.qr($condition.expressionNames.putAll($versionedCondition.expressionNames))
  $util.qr($condition.expressionValues.putAll($versionedCondition.expressionValues))
#end

#set( $expNames = {} )
#set( $expValues = {} )
#set( $expSet = {} )
#set( $expAdd = {} )
#set( $expRemove = [] )

#foreach( $entry in $util.map.copyAndRemoveAllKeys($context.args.input, ["pk","sk"]).entrySet() )
  #if( $util.isNull($entry.value) )
    #set( $discard = $expRemove.add("#$entry.key") )
    $util.qr($expNames.put("#$entry.key", "$entry.key"))
  #else
    $util.qr($expSet.put("#$entry.key", ":$entry.key"))
    $util.qr($expNames.put("#$entry.key", "$entry.key"))
    $util.qr($expValues.put(":$entry.key", $util.dynamodb.toDynamoDB($entry.value)))
  #end
#end

#set( $expression = "" )
#if( !$expSet.isEmpty() )
  #set( $expression = "SET" )
  #foreach( $entry in $expSet.entrySet() )
    #set( $expression = "$expression $entry.key = $entry.value" )
    #if( $foreach.hasNext() )
      #set( $expression = "$expression," )
    #end
  #end
#end

#if( !$expAdd.isEmpty() )
  #set( $expression = "$expression ADD" )
  #foreach( $entry in $expAdd.entrySet() )
    #set( $expression = "$expression $entry.key $entry.value" )
    #if( $foreach.hasNext() )
      #set( $expression = "$expression," )
    #end
  #end
#end

#if( !$expRemove.isEmpty() )
  #set( $expression = "$expression REMOVE" )
  #foreach( $entry in $expRemove )
    #set( $expression = "$expression $entry" )
    #if( $foreach.hasNext() )
      #set( $expression = "$expression," )
    #end
  #end
#end

#set( $update = {} )
$util.qr($update.put("expression", "$expression"))
#if( !$expNames.isEmpty() )
  $util.qr($update.put("expressionNames", $expNames))
#end
#if( !$expValues.isEmpty() )
  $util.qr($update.put("expressionValues", $expValues))
#end

{
  "version": "2017-02-28",
  "operation": "UpdateItem",
  "key": {
    "pk": { "S": "$context.args.input.pk" },
    "sk": { "S": "$context.args.input.sk" }
  },
  "update": $util.toJson($update),
  "condition": $util.toJson($condition)
}
```

d. ListTodos with support for queries

```
## Using limit with a filter expression can cause issues because DynamoDB
## evaluates the filter after the limit, so a page may contain fewer
## matching results than the limit. ##
#set( $limit = $util.defaultIfNull($context.args.limit, 10) )
{
  "version": "2017-02-28",
  "operation": "Query",
  "query": #if( $context.args.query )
    $util.transform.toDynamoDBFilterExpression($ctx.args.query)
  #else
    null
  #end,
  "filter": #if( $context.args.filter )
    $util.transform.toDynamoDBFilterExpression($ctx.args.filter)
  #else
    null
  #end,
  "limit": $limit,
  "nextToken": #if( $context.args.nextToken )
    "$context.args.nextToken"
  #else
    null
  #end
}
```

e. GetTodo

```
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "pk": $util.dynamodb.toDynamoDBJson($ctx.args.query.pk),
    "sk": $util.dynamodb.toDynamoDBJson($ctx.args.query.sk)
  }
}
```
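
As a usage sketch, a getTodo call against this schema would look like the following (the pk/sk values are made-up examples):

```graphql
query {
  getTodo(query: { pk: "USER#123", sk: "TODO#001" }) {
    pk
    sk
    displayName
    notes
  }
}
```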

djkingh01 commented 5 years ago

I created one stack for the custom table, data source, and role. How do I point to this stack from another stack where the schema definitions are created?
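
If both stacks are Amplify custom stacks, they already receive shared parameters such as AppSyncApiId; for stacks deployed separately, the generic CloudFormation mechanism is exports and imports. A minimal sketch, with hypothetical export and resource names:

```yaml
# In the stack that owns the table, role, and data source:
# export whatever other stacks need.
Outputs:
  TodoDataSourceName:
    Value: !GetAtt TodoDataSource.Name
    Export:
      Name: !Sub "${AWS::StackName}-TodoDataSourceName"

# In the stack that defines the resolvers: import the exported value
# (the importer must use the concrete export name).
Resources:
  GetTodoResolver:
    Type: AWS::AppSync::Resolver
    Properties:
      ApiId: !Ref AppSyncApiId
      TypeName: Query
      FieldName: getTodo
      DataSourceName: !ImportValue my-custom-stack-TodoDataSourceName
      RequestMappingTemplate: "## request mapping template here ##"
      ResponseMappingTemplate: "$util.toJson($ctx.result)"
```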

timoteialbu commented 5 years ago

@mikeparisstuff

> Currently the @model directive expects to contain a single data type; however, we are looking at generalizing this to allow overloaded indexes and multiple types per table, but there is no set date for when this will be available. That being said, you are able to write your own resolvers that target your own data sources within the CLI using custom stacks.

My team is very interested in this feature. We've recently had a Well Architected Review with an AWS Solutions Architect and our AWS rep and one of the issues that came out of that was that we should consolidate all our types into one big table. We were told that was the proper usage of DynamoDB. I know you guys build features based on demand, so I am just commenting to vote on it.

Thanks for all your hard work!

hisham commented 5 years ago

+1. I'd like to use the auto-generated code but against a different table than the amplify cli default as well. This is needed to do https://github.com/aws-amplify/amplify-cli/issues/1037 and aws-amplify/amplify-category-api#453.

Would also like an option to create the table in a different region than the amplify default.

clinicalinkgithub commented 5 years ago

> @djkingh01 For now, yes, you would have to implement everything yourself. We do want to support the use case where you can adopt an existing table for an @model type, but it has not yet been implemented. We have looked at a few approaches including expanding @model with new arguments as well as adding a new directive such as @table. If you are interested and want to contribute a potential design, I would be happy to review it.
>
> P.S. We have also discussed opening up the CLI such that you can write and run your own transformers as part of the normal workflow. I'm curious how many people would want this feature?

@mikeparisstuff I am interested in the @table directive (specifying an existing table for @model, to support the best practice of single-table design; single-table means "multiple types per table").

  1. Are you any further along on this design? Can I get a beta version?
  2. Are you opening up the CLI to custom transformers/directives?

romislovs commented 4 years ago

+1

andreikrasnou commented 4 years ago

+1

jon144 commented 4 years ago

+1

dtelaroli commented 4 years ago

+1

mnishiguchi commented 4 years ago

+1

alexkates commented 4 years ago

+1

EphemeralX commented 4 years ago

+1

givenm commented 4 years ago

+1

justinsamuel92 commented 4 years ago

+1

akash-jose commented 4 years ago

+1

MehdiTAZI commented 4 years ago

+1 I think it would be pretty useful if we could specify the table name.

vladimirpekez commented 4 years ago

+1 is this released yet? :)

Hernanm0g commented 4 years ago

+1

castri1 commented 4 years ago

+1

MauriceWebb commented 4 years ago

+1 🔥🔥🔥 !!

Sparkboxx commented 3 years ago

+1

wedwards-inirv commented 3 years ago

+1

simpian commented 3 years ago

+1

vuchau commented 3 years ago

+1

ohing504 commented 3 years ago

+1

danfreid commented 3 years ago

+1

skryshi commented 2 years ago

+1

DiegoMcDipster commented 2 years ago

A possible workaround I've come across is Override (https://docs.amplify.aws/cli/graphql/override/#customize-amplify-generated-appsync-graphql-api-resources).

So, run:

```
amplify override api
```

Then update the table name in the generated override.ts:

```ts
resources.models["Todo"].modelDDBTable.tableName = "tod";
```

That created a table named 'tod' (without the randomly generated string attached to it). I tested the mutations and subscriptions via the console and they work great!
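
For context, the whole override.ts stays small. A sketch based on the override documentation linked above, with the Todo model and the "tod" table name taken from this example:

```ts
// amplify/backend/api/<your-api-name>/override.ts
import { AmplifyApiGraphQlResourceStackTemplate } from '@aws-amplify/cli-extensibility-helper';

export function override(resources: AmplifyApiGraphQlResourceStackTemplate) {
  // Pin the generated DynamoDB table to a fixed name instead of the
  // default TypeName-<apiId>-<env> pattern.
  resources.models['Todo'].modelDDBTable.tableName = 'tod';
}
```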

Unfortunately, it didn't solve my problem, because I already have an existing table for the API, and when I ran amplify push it failed with an "already exists" error.

But I thought I'd put my findings here as they could help others who want to rename the table.

mciobanu commented 2 years ago

An option for using custom names for new DynamoDB tables is cdk-appsync-transformer.

This project replaces the Amplify CLI with the CDK, so it's a pretty big change if you don't already use the CDK.

espetro commented 1 year ago

+1

MurkyMarc commented 1 year ago

+1

beto4812 commented 6 months ago

+1