aws-amplify / amplify-cli

The AWS Amplify CLI is a toolchain for simplifying serverless web and mobile development.

RFC: Importing Existing AWS Resources to Amplify Project using the Amplify CLI #3977

Open kaustavghosh06 opened 4 years ago

kaustavghosh06 commented 4 years ago

Currently, the CLI provisions new AWS resources based on the categories that you add to your Amplify project. This RFC is to gauge community interest and hear more thoughts around the CLI being able to import existing resources into an Amplify project.

As an MVP, we're planning to support importing resources in the following categories:

- Auth
- Storage
- API
  - GraphQL API
  - REST API

We'll be relying heavily on the new CloudFormation import functionality (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-existing-stack.html) for this feature.

Please comment on this thread if you have some thoughts or suggestions on this feature or if you think we’re missing any story points which you would love to see as a part of this feature.

andrewbtp commented 4 years ago

How would this play out across environments? If I add a bucket `im-a-bucket` to env `test` and then create a new env `prod`, does it hit the same bucket or create a new one?

Though I hopefully won't need it, I'm a big fan of this RFC.

hisham commented 4 years ago

Will you be able to add resources in other regions, or will it be limited to the same region as the Amplify env?

jagadishallakanti commented 4 years ago

It would be amazing if we could use the same Cognito user pool for two different applications, for example one app for back-office management and another for front-office users. That way security is not compromised and accidental releases don't happen.

TLaue commented 4 years ago

We are currently already using AWS Amplify with existing resources (e.g. a Cognito User Pool) by adding manually created config files, which are included by our CI/CD pipeline depending on the relevant stage. It works, but I would really prefer a way that is natively supported by the AWS Amplify CLI.

Furthermore, I would really love to be able to include existing AppSync APIs with support for Amplify DataStore, even though this might be a bigger task.

RossWilliams commented 4 years ago

I'm not in favour of this feature; I'd rather see this supported through solid documentation and by exposing users to a bit of plumbing.

Main reasons:

cesiztel commented 4 years ago

I am really happy that you guys are considering this feature 🔥. In short, my opinion is that this feature can increase the number of projects and developers that adopt AWS Amplify. My reasons are:

The CLI will probably become more complex, and it will need to manage existing and new services at the same time. But that is a "downside" worth accepting.

asmajlovicmars commented 4 years ago

This would be perfect, as long as it's robust enough to never delete the existing Auth if there's a reference to it from another project. We're planning to add several applications around a single Cognito User Pool, which serves as SSO, and being able to attach the existing Auth is exactly what we need. I second the question of how this would work across different envs.

Johnniexson commented 4 years ago

Yea, this will be very useful for me and will ease the stress of manually setting up the config file to use existing resources. 👍

BeaveArony commented 4 years ago

Wonderful news!

It took me quite some time in January to include an existing UserPool in a project at my company. The cloud is managed by Terraform and set up manually. Every environment is in a different account and I do not have permission to run amplify push. The people in charge are very skeptical about this kind of automation and are afraid that Amplify would delete or change something that other parts of the software rely on.

I started by creating the UserPool Client for web and the identity pool in Terraform, but soon realized that transforming all the CloudFormation templates into another format was too much work. With every Amplify update I would also have to check whether something had changed; that would be a nightmare!

Note:

Before I could do anything with the Amplify CLI, I had to configure the CLI with an external AWS account where I had full permissions. It turned out to be a good testing ground for experiments without committing the changes to git. This gave me the /amplify folder I needed to check into version control.

My solution so far:

  1. Create a GraphQL API and have it also create the UserPool, the UserPool Client, the IdentityPool, and so on...
  2. Create a custom resource as described in the docs, copy the automatically created CloudFormation template over, and adjust the parameters.json to include the existing UserPool ARN (this is automated in the build pipelines to get the proper ARN; see the sketch after this list)
  3. Go through the template, delete the UserPool creation part, and update all mentions of the UserPool ARN to use the configured input parameter, which is quite tedious and error-prone
  4. Remove the Auth category before pushing or committing the source code
  5. Change the GraphQL API in CloudFormation so it points to the new custom resource
  6. Repeat these steps for other categories that depend on the Auth category, i.e. Storage
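
To illustrate step 2: the parameters.json of such a custom resource ends up holding the ARN, roughly like this (the parameter name is my own invention and has to match whatever the edited template declares):

```json
{
  "existingUserPoolArn": "arn:aws:cognito-idp:eu-central-1:123456789012:userpool/eu-central-1_EXAMPLE"
}
```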

In build pipelines:

  1. Get the UserPoolArn with the TerraForm CLI and write it to some config file so it can also be used by the frontend build during a later build step
  2. Write the extracted value in the auth parameters.json file
  3. Create a custom aws_config.js file with a modified pushAmplify.sh script
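
A rough sketch of steps 1 and 2, assuming a Terraform output named user_pool_arn and jq on the build agent (all paths and names are placeholders for the real setup):

```sh
# Extract the ARN from Terraform state (-raw requires Terraform >= 0.14)
USER_POOL_ARN=$(terraform output -raw user_pool_arn)

# Persist it for the frontend build in a later step
echo "USER_POOL_ARN=${USER_POOL_ARN}" >> build.env

# Patch it into the custom resource's parameters.json before amplify push
PARAMS="amplify/backend/auth/customAuth/parameters.json"  # example path
jq --arg arn "$USER_POOL_ARN" '.existingUserPoolArn = $arn' "$PARAMS" > tmp.json \
  && mv tmp.json "$PARAMS"
```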

The Resulting Project

There are probably several steps I did not mention, but I ended up with the Amplify CLI managing everything except the UserPool, including a new UserPoolClientForWeb and IdentityPool. Every environment is in a different account, so in team-provider-info.json every env has a different AmplifyAppId, and if the CI/CD pipeline adds or changes something in this file, it needs to be committed back to the repo; otherwise a new Amplify App will be created with every build!
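
For reference, the relevant part of team-provider-info.json looks roughly like this in such a setup (IDs and region are placeholders):

```json
{
  "dev": {
    "awscloudformation": {
      "Region": "eu-central-1",
      "AmplifyAppId": "d1111111111111"
    }
  },
  "prod": {
    "awscloudformation": {
      "Region": "eu-central-1",
      "AmplifyAppId": "d2222222222222"
    }
  }
}
```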

Working with this kind of setup

Every change to the auth part is a huge pain! If I want to add IAM authentication to the AppSync API, for example, I need to try it out in another project and compare the CFN templates to see what changed. Amplify is under heavy development, adding features and fixing bugs all the time. The generated templates sometimes change quite heavily, which is a good thing since they are auto-generated, but I just want to use Amplify because I do not want to write CloudFormation or Terraform! So this RFC would lift a lot of weight off my shoulders!

Create the initial Amplify App with CFN?

I think if I were to start over, I would explore creating the Amplify App with either a CloudFormation template, the aws-cli (not the amplify-cli), or some other infrastructure-as-code tool, and see how to progress from there...

Protect existing resources

I'm concerned about having the CLI modify or accidentally delete the existing UserPool. Even changing the triggers might overwrite an existing trigger and cause some grief for the developer responsible for that change. So maybe start with only adding resources like the UserPoolClient and IdentityPool and wiring them to the other Amplify categories, but leave the UserPool alone? I think the people in charge wouldn't mind seeing UserPoolClients created for every new Amplify project, as long as the existing parts are protected. Managing triggers through Amplify is a very good feature, though!

Export resources

There is also some additional complexity when Amplify needs to export created resources. One scenario I'm facing is giving the AppSync API URL a pretty company sub-domain. There is an existing Terraform module that makes this easy, but I would need to figure out how best to fetch the URL and create a second Terraform plan that runs after the Amplify deploy process. (The first plan only imports existing resources.)

TAGS

Also, please add the option to specify tags for the root stack, so that every resource created by the Amplify CLI gets them applied automatically! Please! My current workaround is to redeploy the root stack with the aws-cli, adding the --tags parameter. This is done after every amplify push during CI/CD.
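
The workaround looks roughly like this (stack name and tags are examples; depending on the stack, existing parameters may also need to be re-passed with UsePreviousValue):

```sh
aws cloudformation update-stack \
  --stack-name amplify-myapp-dev-123456 \
  --use-previous-template \
  --capabilities CAPABILITY_NAMED_IAM \
  --tags Key=project,Value=myapp Key=env,Value=dev
```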

Parting words

Anyway, I went into quite some detail so you can hopefully understand one developer's needs when it comes to integrating an Amplify App into an existing AWS infrastructure. Any progress in this area is highly anticipated by me!

vdumouchel commented 4 years ago

Hello guys!

Had a mid-size project on Amplify pushing to DynamoDB tables. I changed some connections between types in my GraphQL schema, and this introduced a secondary-index error on a table during amplify push... I tried reverting by pushing the schema I had before the connection changes, and now I have big rollback errors on many tables in DynamoDB. Hard to know what the problem is and impossible to debug. So I feel I'm left with starting from scratch (losing everything in the AppSync endpoint)... but I would very much like to start a new Amplify project and just connect my previous auth pool + previous AppSync endpoint, and start a new DynamoDB to get back to the previous state of my project, so the AppSync endpoint pushes data into the new DB to seed it as if nothing ever happened!

If you have other ways right now to revert the changes made to my DynamoDB and get past the errors on amplify push, I'm all ears!

RossWilliams commented 4 years ago

@vdumouchel

> Hard to know what the problem is and impossible to debug.

You likely have GSIs created on your DynamoDB tables from the first failed push. When a GSI change fails in DynamoDB, CloudFormation will not roll it back for you; it must be done manually. You need to use the API or console to remove the created GSIs so that your tables match your last-known-good schema. Then be careful and make sure each push only changes one GSI per table going forward. The feature being discussed here is not likely to be a good long-term solution to your situation.
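
For example, removing a leftover GSI with the AWS CLI looks like this (table and index names are placeholders); DynamoDB only allows one GSI create/delete per update anyway:

```sh
aws dynamodb update-table \
  --table-name Post-abcdefghij-dev \
  --global-secondary-index-updates '[{"Delete":{"IndexName":"byOwner"}}]'
```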

BeaveArony commented 4 years ago

@vdumouchel @RossWilliams is spot on. Do NOT push changes where you change more than one @connection or @key.

I would suggest always experimenting in a different environment, and make sure to merge changes to the main branch only incrementally.

Renaming trick

One 'trick' I use when I get into this situation and don't want to create a new environment is to simply rename all those @model types! This will delete the old DynamoDB tables and create new ones with all the changes you want. If the data is important you can make a backup, e.g. in the Management Console. Once you merge this schema with the renamed @model types to every other branch/env, you can rename them back and push to every branch/env again. Then restore the backup or repopulate the data somehow.

I haven't tried the backup part yet, so I cannot help you with that.
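
A minimal illustration of the trick (the type name is an example):

```graphql
# Before: the Todo table holds the old, broken index state
type Todo @model {
  id: ID!
  name: String
}

# After: renaming the type makes the next push drop the Todo table
# and create a fresh TodoV2 table with the desired schema
type TodoV2 @model {
  id: ID!
  name: String
}
```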

Johnniexson commented 4 years ago

> @vdumouchel @RossWilliams is spot on. Do NOT push changes where you change more than one @connection or @key.
>
> I would suggest always experimenting in a different environment, and make sure to merge changes to the main branch only incrementally.
>
> Renaming trick
>
> One 'trick' I use when I get into this situation and don't want to create a new environment is to simply rename all those @model types! This will delete the old DynamoDB tables and create new ones with all the changes you want. If the data is important you can make a backup, e.g. in the Management Console. Once you merge this schema with the renamed @model types to every other branch/env, you can rename them back and push to every branch/env again. Then restore the backup or repopulate the data somehow.
>
> I haven't tried the backup part yet, so I cannot help you with that.

Hmmm... this is a dope idea, I will give it a try also.

danrivett commented 4 years ago

I love this RFC, I'm sure I'll have additional thoughts, but the first two things that spring to mind are:

  1. When specifying existing names for things like DynamoDB tables, it would be good to support an env placeholder so that the name can vary per environment if desired
    1. e.g. I could specify notes-${env} and it would resolve to notes-dev or notes-prod, or I could specify just notes and it would be used across all environments
  2. Just an initial thought, but I think it is safest to have a hard lock on deleting any existing resources that are referenced.
    1. e.g. it should be able to distinguish between resources created and existing resources referenced, per environment; removing an environment would only remove the automatically created resources and leave the existing resources behind.

apoorvmote commented 4 years ago

I want to collect emails with Hugo static website. Then manage email list with react app and send bulk email with SES and lambda. I have created another issue aws-amplify/amplify-cli#4175 with more details.

cyrfer commented 4 years ago

Maybe there is a comment above that elaborates, but based on the heading I am worried that "support for USING existing resources" (created by whatever process, including another CloudFormation stack) might get mixed in with "importing existing resources", which implies leveraging CloudFormation's recent support for importing resources that were not created by a CF stack.

If a resource was created by another CF stack, I don't want to modify that resource. Instead, I think the CF templates and the CLI interview should allow users to provide existing resources to be USED (not only IMPORTED).

I made a feature request for "support for existing resources" over here: https://github.com/aws-amplify/amplify-cli/issues/4197. I think it's notable that some cases (mine) are currently supported! And I don't want that to be broken. :)

sw33tr0ll commented 4 years ago

Please add this please! Thank you, great idea

diegorey commented 4 years ago

This would be amazing for working with multiple apps within the same company. It would be great to add functions directed at an existing table and/or AppSync API. Thanks for the great work!

PS: Are there examples in the documentation of accessing existing AppSync APIs through Lambda?

arckdash commented 4 years ago

I believe I managed to solve as I described it in this thread: https://github.com/aws-amplify/amplify-js/issues/4704#issuecomment-633189931

Alk3m1st commented 4 years ago

Also adding my support for this feature. Similar to @cyrfer, my use case is about being able to use / link Amplify to existing resources for easy and seamless integration, rather than importing them for control by the Amplify App. I'm most interested in Auth, allowing a Cognito instance to be used by multiple apps with only one in control of it (or perhaps none at all, where it's managed externally). Use of existing DynamoDB tables by API (AppSync) or S3 buckets by Storage would also find many uses. I love that Amplify still allows delving into the CloudFormation templates for customisation, and @RossWilliams makes a good point about stronger documentation here, but I enjoy the power of the Amplify CLI to perform most tasks without needing to do so. For example, the Lambda triggers were really useful to me recently for easily consuming DynamoDB streams from an existing table (outside Amplify's control).

JamieSlome commented 4 years ago

Apologies if I am duplicating advice written above. I have not had the chance to read through all of it.

It would appear that creating an entirely new resource (a Lambda function, for example) through the CLI alongside an already existing reference (i.e. an AppSync API) and executing amplify push will force the backend environment to reference your already existing API resource.

Of course, you can go ahead and delete the Lambda or temporary resource after.

Cheers 🍰

frankoid commented 4 years ago

We have a largish AppSync API (9,000 lines of CloudFormation, not including the Lambdas it calls) that we'd like to be able to develop and debug locally. At the moment we have to deploy all changes to AWS before we can test them, which means a slow feedback loop (especially since we deploy via a pipeline we've built, so it requires a git commit, a minute or two's wait, and ~5 minutes for the AppSync CloudFormation to deploy).

We'd love to be able to develop locally so if Amplify could import existing cloudformation templates for AppSync and Lambdas that would be great.

A bit more info about our stack: we have a React web app and an AppSync API backed by Lambdas, plus some direct AppSync->DynamoDB access. The VTL in our AppSync API is quite complex (possibly too complex, but simplifying it would involve compromises/downsides).

the1adi commented 4 years ago

When will this be available?

r0zar commented 4 years ago

I like the sound of this!

I have a specific use case I'm curious if this will help with...

Often after running amplify add auth, I'll have my auth backend set up, but eventually I run into requirements that are not possible with the CLI and that I'm not sure how to set up directly with CloudFormation.

For those one-off configuration changes done through the AWS UI, it would be really nice to be able to sync/import them back into my /amplify folder as templates, both for learning purposes and for project cleanliness.

Thanks!

jkeys-ecg-nmsu commented 4 years ago

This RFC covers a major feature whose absence otherwise drives us to manage our own stacks. Usage patterns and requirements change over time -- the problem being that the benefits of the Amplify CLI quickly turn into negatives in that scenario.

Adding functionality for importing existing resources developed "on the side" is a major argument in favor of retaining the Amplify CLI versus "ejecting" (in Facebook React nomenclature) the CloudFormation templates and managing the infrastructure on our own. The other major feature is the GraphQL transformer, which is enormously beneficial but underdeveloped. (See outstanding issues regarding functionality: aws-amplify/amplify-cli#2567 aws-amplify/amplify-category-api#368 aws-amplify/amplify-category-api#365 -- and those are just my requests.) This was maybe always possible with CustomResources, but frankly that feels underbaked and counter to the idea of Amplify (minimal config). It's an escape hatch that shouldn't have to be used.

This comment maybe belongs in an issue, but I would be interested in Amplify developers' recommendations for users who have outgrown (in complexity) the Amplify CLI, along with recommended migration and development patterns.

RandomEngy commented 4 years ago

On the GraphQL side, I would like to be able to use any AppSync API, no matter what storage backs it. I am using an AppSync API backed by a relational database, added via amplify add codegen --apiId xxxxxxxx. Right now this is impossible to use with Amplify! You have to use the old AWS SDK in this case.

Yet the documentation in the AWS AppSync console still directs you to Amplify.

This wouldn't normally be a huge problem and I would use the old SDK, but I am running into a show-stopping auth issue there and nobody is helping.

edwardfoyle commented 4 years ago

Hey @RandomEngy what auth issue are you running into with the SDK?

RandomEngy commented 4 years ago

@edwardfoyle https://github.com/aws-amplify/aws-sdk-android/issues/2059 . I get SIGNED_OUT_USER_POOLS_TOKENS_INVALID quite often after the 1 hour token expires.

curtismorte commented 4 years ago

Having worked through actually integrating an existing application with the Amplify CLI, here are the pitfalls where the Amplify CLI came up short:

  1. DynamoDB tables that we needed to create and add keys to were handled perfectly with @model and @key. The problem is, we didn't have a way to map existing Queries and Mutations to the model. If this mapping existed, we would be able to use VTL templates in /amplify/backend/api/NAME/resolvers. As it sits now, we have to manually add every resolver to each new environment we create. Total PITA!
  2. Aurora Serverless Postgres support (because Postgres does a lot of things better than MySQL). See aws-amplify/amplify-category-api#351 for my response on allowing data sources to be added without Amplify needing to manage the data models.
  3. Hosting & cache settings for origin requests using Lambda@Edge. We use Lambda@Edge with CloudFront cache settings for origin requests to route users to specific keys within S3 buckets for language-specific builds. There doesn't appear to be any way for us to configure this for hosting. While we didn't try to set this up with the Amplify CLI for that reason, we also found a StackOverflow post that said: "I learned that after adding a Route 53 managed custom domain to my Amplify App via the Amplify Console, an AWS-managed CloudFront distribution was automatically created. This CloudFront distribution is not visible within your account and cannot be directly managed by you." This poses a problem, since we would need to add Lambda@Edge to that CloudFront distribution and it appears we can't.

I'm working through @functions right now and will report back with any shortcomings.

EDIT: If you have an existing DynamoDB table and existing Query & Mutation definitions, you can have Amplify publish your resolvers by adding a definition to /amplify/backend/api/{NAME}/stacks/CustomResources.json.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "An auto-generated nested stack.",
  "Metadata": {},
  "Parameters": {},
  "Resources": {
    "Query{NAME}": {
      "Type": "AWS::AppSync::Resolver",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "DataSourceName": "{DYNAMODB_RESOURCE_NAME}",
        "TypeName": "Query",
        "FieldName": "{QUERY_NAME}",
        "RequestMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.{QUERY_NAME}.req.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        },
        "ResponseMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.{QUERY_NAME}.res.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        }
      }
    }
  },
  "Conditions": {},
  "Outputs": {
    "EmptyOutput": {
      "Description": "An empty output. You may delete this if you have at least one resource above.",
      "Value": ""
    }
  }
}
```

renebrandel commented 3 years ago

⭐️ ⭐️ ⭐️
Hi everyone! We've made some good progress on this recently and would love to get your feedback. We started with Cognito User Pool import; try it with the instructions below.

👉 To install the preview version of the Amplify CLI with this feature, run:

npm install -g @aws-amplify/cli@import

👉 Run this command to import an existing Cognito User Pool in your Amplify project:

amplify import auth

🚀 What this will do:

📺 DEMO video: https://www.twitch.tv/videos/763402046?t=00h52m59s

This CLI preview version demonstrates how it works with the Cognito User Pools. We're going to tackle the rest of the scenarios listed in the original RFC next.

Note: Don't use this for your production environment. This is only a preview version of Amplify CLI!

cyrfer commented 3 years ago

I'm confused about the terms import and link used in the video. To me, those mean very different things. What is the current goal?

I'm passing a userPoolId as a CF stack parameter. Is that what you call a link?

asmajlovicmars commented 3 years ago

Hi @renebrandel, I just tested the import auth and it worked great in a single environment. When I tried to create an additional environment, it still pulled the same original auth, so I'm guessing that part of the feature is not implemented yet, but overall this is great from my perspective.

renebrandel commented 3 years ago

> I'm confused about the terms import and link used in the video. To me, those mean very different things. What is the current goal?

Good call-out. The copy wasn't finalized. This is an "import" feature where the resource gets referenced in the CFN; the actual management of that resource still happens outside the Amplify project.

> Hi @renebrandel, I just tested the import auth and it worked great in a single environment. When I tried to create an additional environment, it still pulled the same original auth, so I'm guessing that part of the feature is not implemented yet, but overall this is great from my perspective.

Yeah, we're going to add multi-env support as soon as we wrap up Identity Pool imports. It's on our to-do list.

Alk3m1st commented 3 years ago

Will the import work across accounts also?

renebrandel commented 3 years ago

@Alk3m1st - this is only going to support imports from within the same account.

renebrandel commented 3 years ago

⭐ ⭐ ⭐ Hi folks - we've officially released Cognito User Pool & Identity Pool imports. Would love to get your thoughts on this. Next up we'll tackle storage imports.

Blog post: https://aws.amazon.com/blogs/mobile/use-existing-cognito-resources-for-your-amplify-api-storage-and-more/
Docs: https://docs.amplify.aws/cli/auth/import
⭐ ⭐ ⭐

rsuresh27 commented 3 years ago

So far I have run into no problems! Although I do have a question. From my understanding of the documentation, the app client without the secret key is for storing data of the todo list. I have just one Cognito user pool for AWS Amplify authentication. Whenever I tried adding the Cognito user pool to the Cognito identity pool and running amplify import auth from the Amplify CLI at the root of my project, it gave me an error saying I needed an app client without a secret key. Why is it required to have two app clients: one with a client secret and one without?

AustinZhu commented 3 years ago

Are we going to have commands for importing existing S3 buckets and REST APIs in the future? I am migrating to Amplify, so I really need this one. Thank you!

P.S.: I know I can configure them manually in my code, but it would be better to have them in the backend env in Amplify.

BeaveArony commented 3 years ago

Thanks for this new feature. I have the scenario where there is an existing pool with a secret app client, managed with Terraform, so the clientWeb and the IdentityPool are both missing. Where should I manage those resources? Create another TF plan? Create custom resources in Amplify? CDK? I'm thinking Amplify custom resources is a good place, so as not to have three different places managing the resources. What do you think of extending this import feature to auto-generate and manage the missing resources with Amplify? Maybe a new nested stack with the missing resources?

renebrandel commented 3 years ago

> Are we going to have commands for importing existing S3 buckets and REST APIs in the future? I am migrating to Amplify, so I really need this one. Thank you!

Yes! That's on our roadmap. We're going to tackle S3 next. We're still looking into REST API imports. @AustinZhu can you provide a more detailed scenario for your REST API import?

renebrandel commented 3 years ago

> Thanks for this new feature. I have the scenario where there is an existing pool with a secret app client, managed with Terraform, so the clientWeb and the IdentityPool are both missing. Where should I manage those resources? Create another TF plan? Create custom resources in Amplify? CDK?

I think the best way is to manage it in a single location, whatever your IaC solution is.

> I'm thinking Amplify custom resources is a good place, so as not to have three different places managing the resources. What do you think of extending this import feature to auto-generate and manage the missing resources with Amplify? Maybe a new nested stack with the missing resources?

This is a good point. We might take this up as an improvement in the future. The concern I have here is that it's pretty easy to get into inconsistent states. What's your current setup? Do you have a Terraform configuration as an example?

renebrandel commented 3 years ago

> So far I have run into no problems! Although I do have a question. From my understanding of the documentation, the app client without the secret key is for storing data of the todo list. I have just one Cognito user pool for AWS Amplify authentication. Whenever I tried adding the Cognito user pool to the Cognito identity pool and running amplify import auth from the Amplify CLI at the root of my project, it gave me an error saying I needed an app client without a secret key. Why is it required to have two app clients: one with a client secret and one without?

Various CLI components currently take a dependency on this "shape" of the Cognito resource format. This is a requirement that we might drop in the future. Good call-out. For now, you'll just need to add a second app client in order to conform to the Amplify requirements.
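
If it helps, that second app client can be added with a single CLI call along these lines (pool ID and client name are placeholders):

```sh
aws cognito-idp create-user-pool-client \
  --user-pool-id us-east-1_EXAMPLE \
  --client-name amplify-app-client \
  --no-generate-secret
```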

cyrfer commented 3 years ago

@renebrandel Is the "imported" resource specific to each environment? I use a different Cognito User Pool for each of my environments.

amirmishani commented 3 years ago

@renebrandel something I didn't see in the original RFC was extending Auth, API, and Storage to use existing Lambdas. I really like the AWS CDK and I want to create my Lambdas using CDK and TypeScript. I want to be able to use these Lambdas (some are Step Functions, another feature the Amplify CLI doesn't currently have) as resolvers for mutations, or as Auth Lambda triggers or DynamoDB triggers.

warrenmcquinn commented 3 years ago

I'm interested in making some modifications to an existing Amplify-managed Cognito User Pool (for example, enabling username case-insensitivity). I'm unclear about how amplify import auth might help.

Would it be possible for us to create a new self-managed Cognito User Pool, import users from the current Amplify-managed Cognito, and then amplify import that new self-managed Cognito into our prod environment?

Edit: I'm aware of this solution to import using a user migration Lambda trigger. Would that be the best solution?

renebrandel commented 3 years ago

> @renebrandel Is the "imported" resource specific to each environment? I use a different Cognito User Pool for each of my environments.

You'll be asked to either import a different Cognito resource or maintain the same Cognito resource for your app's auth category.

If you want Amplify to manage your auth resources in a new environment, run amplify remove auth to unlink the imported Cognito resource and amplify add auth to create new Amplify-managed auth resources in the new environment.

https://docs.amplify.aws/cli/auth/import#multi-environment-support

renebrandel commented 3 years ago

> @renebrandel something I didn't see in the original RFC was extending Auth, API, and Storage to use existing Lambdas. I really like the AWS CDK and I want to create my Lambdas using CDK and TypeScript. I want to be able to use these Lambdas (some are Step Functions, another feature the Amplify CLI doesn't currently have) as resolvers for mutations, or as Auth Lambda triggers or DynamoDB triggers.

That's a good call-out. I think we should look into that for next year.

renebrandel commented 3 years ago

> I'm interested in making some modifications to an existing Amplify-managed Cognito User Pool (for example, enabling username case-insensitivity). I'm unclear about how amplify import auth might help.
>
> Would it be possible for us to create a new self-managed Cognito User Pool, import users from the current Amplify-managed Cognito, and then amplify import that new self-managed Cognito into our prod environment?
>
> Edit: I'm aware of this solution to import using a user migration Lambda trigger. Would that be the best solution?

I haven't tried it myself, but if you are able to transfer the users out and want to fully manage the Cognito instance yourself, you could re-reference that new Cognito instance through amplify import auth.

renebrandel commented 3 years ago

Also, just to provide an update: we've now enabled the ability to import S3 buckets and DynamoDB tables into your Amplify project.

Read the blog post on how to import S3 buckets here: https://aws.amazon.com/blogs/mobile/use-an-existing-s3-bucket-for-your-amplify-project/
Docs on how to import existing S3 buckets and DynamoDB tables: https://docs.amplify.aws/cli/storage/import#import-an-existing-cognito-user-pool
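
Per the docs linked above, the command mirrors the auth flow:

```sh
amplify import storage
```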

Simon323 commented 3 years ago

Hello everyone. My team and I built a serverless app on AWS. Currently our app is fully managed through the AWS console, and we plan to build a back office to manage it. For this purpose we want to use Amplify, React, and GraphQL. In my opinion Amplify looks good because it offers a lot of things out of the box.

Currently the main problem is that Amplify doesn't work with the existing components of our app. When I try to create a new GraphQL API for our new back office with amplify add api, I don't have the option to re-use existing DynamoDB tables; Amplify only allows creating a new GraphQL API in AppSync with new DynamoDB tables. @renebrandel I saw your post a few days ago about importing DynamoDB, but the info from those articles does not resolve my issue. I think the functionality I need is the "ability to use existing DynamoDB tables as a part of the @model directive as a part of the GraphQL transformer annotated schema". Do I understand correctly that this feature is not ready yet?