aws-amplify / amplify-cli

The AWS Amplify CLI is a toolchain for simplifying serverless web and mobile development.
Apache License 2.0

Multi-region support #6314

Open fpronto opened 3 years ago

fpronto commented 3 years ago

Is your feature request related to a problem? There isn't a way to create a multi-region application.

Describe the solution you'd like CloudFormation StackSets could be used to solve this problem, combined with DynamoDB Global Tables (a minimal sketch follows).
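For illustration, a minimal sketch of what a StackSet wrapping the app's root template could look like (the stack set name, template URL, and regions are placeholders; the self-managed permission model also assumes the StackSet administration/execution roles already exist):

```yaml
Resources:
  AmplifyMultiRegionStackSet:
    Type: AWS::CloudFormation::StackSet
    Properties:
      StackSetName: amplify-app-multi-region # hypothetical name
      PermissionModel: SELF_MANAGED # assumes the StackSet admin/execution roles exist
      Capabilities:
        - CAPABILITY_NAMED_IAM
      # Placeholder URL: point this at the app's root stack template
      TemplateURL: https://example-bucket.s3.amazonaws.com/root-stack.json
      StackInstancesGroup:
        - DeploymentTargets:
            Accounts:
              - !Ref AWS::AccountId
          Regions: # primary plus backup region
            - us-east-1
            - eu-west-1
```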

Additional context After some AWS outages, my client switched from my application to an older one, because mine simply doesn't work during an outage... I'm currently trying to achieve this myself, but I'm missing the knowledge to do it.

I will wait for your feedback. Keep up the good work!

akshbhu commented 3 years ago

Hi @fpronto

Thanks for the feature request 👍

Can you elaborate on your use case? It will help us design a better solution.

fpronto commented 3 years ago

In case of an outage in one region (for example, us-east-1), I would like a mechanism to fail over to another region, like a backup region.

I wouldn't mind switching regions manually, but the resources/mechanism would need to be created by amplify-cli.

I don't know what more to say to help.

fpronto commented 3 years ago

Let me know if you want more information about this.

fpronto commented 3 years ago

Hi again

I'm trying to help with the CloudFormation problems. When the StackSet tries to create the new stack in a new region, it fails because the deploy bucket (which holds the .zip files for the Lambdas) is in another region. The solution is to copy all the zips into a new bucket in the target region and use that bucket in the nested stacks.

I created the following resources to do this on every run:

```yaml
Mappings:
  SourceCode:
    General:
      S3Bucket: CODE_BUCKET
      KeyPrefix: SOLUTION_NAME/SOLUTION_VERSION

Resources:
  LambdaZipsBucket:
    Type: AWS::S3::Bucket
  CopyZips:
    Type: Custom::CopyZips
    Properties:
      ServiceToken: !GetAtt "CopyZipsFunction.Arn"
      DestBucket: !Ref "LambdaZipsBucket"
      SourceBucket: !FindInMap ["SourceCode", "General", "S3Bucket"]
      Prefix: !FindInMap ["SourceCode", "General", "KeyPrefix"]
  CopyZipsRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Path: /
      Policies:
        - PolicyName: lambda-copier
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                Resource: !Sub
                  - "arn:aws:s3:::${S3Bucket}/${KeyPrefix}*"
                  - S3Bucket: !FindInMap ["SourceCode", "General", "S3Bucket"]
                    KeyPrefix: !FindInMap ["SourceCode", "General", "KeyPrefix"]
              - Effect: Allow
                Action:
                  - s3:ListBucket
                Resource:
                  - !Sub
                    - 'arn:aws:s3:::${S3Bucket}'
                    - S3Bucket: !FindInMap ['SourceCode', 'General', 'S3Bucket']
                  - !Sub
                    - 'arn:aws:s3:::${LambdaZipsBucket}'
                    - LambdaZipsBucket: !Ref 'LambdaZipsBucket'
              - Effect: Allow
                Action:
                  - s3:PutObject
                  - s3:DeleteObject
                Resource: !Sub
                  - "arn:aws:s3:::${LambdaZipsBucket}/${KeyPrefix}*"
                  - LambdaZipsBucket: !Ref "LambdaZipsBucket"
                    KeyPrefix: !FindInMap ["SourceCode", "General", "KeyPrefix"]
  CopyZipsFunction:
    Type: AWS::Lambda::Function
    Properties:
      Description: Copies objects from a source S3 bucket to a destination
      Handler: index.handler
      Runtime: python3.12 # the original used python2.7, which Lambda no longer supports
      Role: !GetAtt "CopyZipsRole.Arn"
      Timeout: 240
      Code:
        ZipFile: |
          import json
          import logging
          import threading
          import boto3
          import cfnresponse

          def copy_objects(source_bucket, dest_bucket, prefix):
              # Copy every object under the prefix from the source deploy
              # bucket into the regional bucket. A paginator is used because
              # list_objects_v2 returns at most 1000 keys per call.
              s3 = boto3.client('s3')
              paginator = s3.get_paginator('list_objects_v2')
              for page in paginator.paginate(Bucket=source_bucket, Prefix=prefix):
                  for o in page.get('Contents', []):
                      key = o['Key']
                      copy_source = {
                          'Bucket': source_bucket,
                          'Key': key
                      }
                      print('copy_source: %s' % copy_source)
                      print('dest_bucket = %s' % dest_bucket)
                      print('key = %s' % key)
                      s3.copy_object(CopySource=copy_source, Bucket=dest_bucket, Key=key)

          def delete_objects(bucket, prefix):
              # Empty the destination bucket on stack deletion so that
              # CloudFormation can remove the bucket itself.
              s3 = boto3.client('s3')
              paginator = s3.get_paginator('list_objects_v2')
              for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
                  contents = page.get('Contents', [])
                  if not contents:
                      continue
                  objects = {'Objects': [{'Key': o['Key']} for o in contents]}
                  s3.delete_objects(Bucket=bucket, Delete=objects)

          def timeout(event, context):
              logging.error('Execution is about to time out, sending failure response to CloudFormation')
              cfnresponse.send(event, context, cfnresponse.FAILED, {}, None)

          def handler(event, context):
              # make sure we send a failure to CloudFormation if the function
              # is going to timeout
              timer = threading.Timer((context.get_remaining_time_in_millis()
                        / 1000.00) - 0.5, timeout, args=[event, context])
              timer.start()

              print('Received event: %s' % json.dumps(event))
              status = cfnresponse.SUCCESS
              try:
                  source_bucket = event['ResourceProperties']['SourceBucket']
                  dest_bucket = event['ResourceProperties']['DestBucket']
                  prefix = event['ResourceProperties']['Prefix']
                  if event['RequestType'] == 'Delete':
                      delete_objects(dest_bucket, prefix)
                  else:
                      copy_objects(source_bucket, dest_bucket, prefix)
              except Exception as e:
                  logging.error('Exception: %s' % e, exc_info=True)
                  status = cfnresponse.FAILED
              finally:
                  timer.cancel()
                  cfnresponse.send(event, context, status, {}, None)
```

Some of these resources use the roles recommended by AWS for StackSets (the StackSet administration and execution roles), so those need to be created before running this.

Don't forget to pass the new bucket into every Lambda's nested stack, so it uses the regional deploy bucket (see the sketch below).
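For example, a nested function stack in the root template could be pointed at the regional copy roughly like this (the parameter names, template key, and zip key are illustrative; check the generated Amplify templates for the real ones):

```yaml
Resources:
  MyFunctionNestedStack:
    Type: AWS::CloudFormation::Stack
    DependsOn: CopyZips # make sure the zips are copied before the function deploys
    Properties:
      # Hypothetical template key inside the regional bucket
      TemplateURL: !Sub https://${LambdaZipsBucket.DomainName}/templates/my-function.json
      Parameters:
        # Regional copy instead of the original cross-region deploy bucket
        deploymentBucketName: !Ref LambdaZipsBucket
        # Hypothetical zip key under the copied prefix
        s3Key: !Sub
          - ${KeyPrefix}/my-function.zip
          - KeyPrefix: !FindInMap [SourceCode, General, KeyPrefix]
```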

I hope this helps with implementing the feature (or at least helps someone who needs multi-region support).

[EDIT]: I edited the copy function so you don't need to pass every function you want to copy as an argument. Don't forget to change the folder of your Lambda zips so that it changes every time you update the stack (something like a version number or a date).

jonmifsud commented 3 years ago

Today there's just been an outage, in excess of 30 minutes, on AWS Amplify in the Ireland region.

Unfortunately I'm not aware of a quick way to set up our Amplify stack with a backup region, so all or most of our clients based in the Ireland region are down until the issue is resolved. It would be really nice to have this.

fpronto commented 3 years ago

@jonmifsud I don't think they will implement this feature soon. If you really need a backup region, I recommend learning about CloudFormation StackSets; there is a lot of information about them. Be aware that there isn't a DynamoDB Global Tables resource for CloudFormation.

You can reuse the Amplify YAML/JSON templates, because Amplify uses CloudFormation under the hood. You just need to explore the Amplify files.

PatrykMilewski commented 3 years ago

@fpronto Global tables are now supported with CloudFormation.
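For reference, a minimal sketch of the resource (the table name, key schema, and regions are just examples):

```yaml
Resources:
  AppGlobalTable:
    Type: AWS::DynamoDB::GlobalTable
    Properties:
      TableName: app-table # hypothetical name
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      StreamSpecification: # streams are required for replication
        StreamViewType: NEW_AND_OLD_IMAGES
      Replicas: # one entry per region the table lives in
        - Region: us-east-1
        - Region: eu-west-1
```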

@jonmifsud It's exactly the same for us; we are trying to find a solution to this problem so the service is no longer down during outages. In our case the outage was actually longer (around 60 minutes).

sergiorodriguez82 commented 2 years ago

I think this should be a priority, taking into account last week's service outage in the us-east-1 region.

PatrykMilewski commented 2 years ago

This issue will come alive each time one of the main regions goes down.

brienpafford commented 2 years ago

I also would like to request that this feature be worked on. It is critical, as others have mentioned, in light of the major outages in us-east-1 that have happened more than once in recent months. In addition, the latest outage wasn't just an hour, as some have noted above, but several hours.

Ajmaljalal commented 2 years ago

I also request this feature.

talaikis commented 2 years ago

Interested in this too for PubSub, though for the most part I can pretty easily replace everything on the backend side. The problem is how to swap the global Amplify.addPluggable(new AWSIoTProvider({ aws_pubsub_region: .... })) on the frontend.

ag-rocket commented 1 year ago

Do we still have plans here to support multi-region? Especially for frontend-only applications.

Is there a recommended path to fail over from CDN failures here?

The us-east-1 outage on 12/17 sent users to a page that looks like this:

[screenshot: CloudFront 503 error page]

[screenshot: reported AWS outage]

YazidHamdi commented 1 year ago

I also request this feature, although my use case is different: I'm thinking more of a "regional DNS redirect", where you hit the main domain and it either reroutes you to an explicitly different domain (maybe with a prefix) backed by an independent regional deployment, or does that under the hood so you don't see a domain change. This is possible by leveraging Route 53 features; the Amplify console also performs region filtering (you can set it up in the web console), where you specify the country codes from which your app will accept traffic. There's this use case, but I also support the "same app everywhere with data sync" use case described by others here. A Route 53 failover setup might look like the sketch below.
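A minimal sketch of such a failover setup (the domain names and endpoints are placeholders):

```yaml
Resources:
  PrimaryHealthCheck:
    Type: AWS::Route53::HealthCheck
    Properties:
      HealthCheckConfig:
        Type: HTTPS
        FullyQualifiedDomainName: primary.example.com # hypothetical primary endpoint
        ResourcePath: /
  PrimaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com. # hypothetical hosted zone
      Name: app.example.com.
      Type: CNAME
      TTL: "60"
      SetIdentifier: app-primary
      Failover: PRIMARY
      HealthCheckId: !Ref PrimaryHealthCheck
      ResourceRecords:
        - primary.example.com # deployment in the primary region
  SecondaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: CNAME
      TTL: "60"
      SetIdentifier: app-secondary
      Failover: SECONDARY
      ResourceRecords:
        - backup.example.com # deployment in the backup region
```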

121940kz commented 1 year ago

Once again, also requesting this feature. Whether multi-AZ or multi-region, something would be better than nothing. us-east-1 is down today; it's been about 30 minutes and it's still down. It would be nice to have an easy way to get multi-AZ or multi-region support in Amplify apps.

tminus1-design commented 1 year ago

It would be great if we could deploy a global table using the CLI that can be synced with AppSync as a data source. This feature would make it much easier to deploy applications in multiple regions. We could then manually override the frontend GraphQL or REST API endpoint; having this feature would streamline the deployment process and save us time.

tminus1-design commented 1 year ago

As of now, the only way to achieve this is by deploying the Global Table separately and then importing it into Amplify. However, this process requires deploying the appropriate IAM roles and other configurations as a custom resource, which can make things more complicated.
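A rough sketch of the pieces involved (the API id parameter, table name, and permissions are placeholders, not Amplify's generated configuration):

```yaml
Parameters:
  AppSyncApiId:
    Type: String # id of the existing AppSync API

Resources:
  GlobalTableAccessRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: appsync.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: global-table-access
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:GetItem
                  - dynamodb:PutItem
                  - dynamodb:Query
                  - dynamodb:Scan
                  - dynamodb:UpdateItem
                  - dynamodb:DeleteItem
                Resource: arn:aws:dynamodb:*:*:table/app-table* # hypothetical table name
  GlobalTableDataSource:
    Type: AWS::AppSync::DataSource
    Properties:
      ApiId: !Ref AppSyncApiId
      Name: GlobalTableSource
      Type: AMAZON_DYNAMODB
      ServiceRoleArn: !GetAtt GlobalTableAccessRole.Arn
      DynamoDBConfig:
        TableName: app-table # the replica in this stack's region
        AwsRegion: !Ref AWS::Region
```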

DarylSerrano commented 7 months ago

> As of now, the only way to achieve this is by deploying the Global Table separately and then importing it into Amplify. However, this process requires deploying the appropriate IAM roles and other configurations as a custom resource, which can make things more complicated.

Can you please show me how you achieved this? I'm trying to make AppSync multi-region using a Global Table as the data source, but Amplify only allows creating a new GraphQL API in AppSync with new DynamoDB tables.