aws-samples / serverless-patterns

Serverless patterns. Learn more at the website: https://serverlessland.com/patterns.

API Gateway to Lambda to RDS Proxy to RDS Postgres Instance #1

Closed omenking closed 2 years ago

omenking commented 3 years ago

To submit a template to the Serverless Patterns Collection, submit an issue with the following information.

🚧 This is a work in progress. 🚧 I am creating this ticket so I can document my journey for submission. 🚧 I will update as I go.

Use the model template located at https://github.com/aws-samples/serverless-patterns/tree/main/_pattern-model to set up a README, template and any associated code.

Description (mid-length e.g. "Create a Lambda function that sends events to EventBridge.")

Create API Endpoints to a Lambda function that queries a Postgres database on RDS

Language: (optional e.g. "Python", if you have a Lambda function in your example)

Python

YouTube videoId (optional e.g. "VI79XQW4dIM")

[WIP]

Framework (currently we support SAM or CDK)

SAM

Services from/to (e.g. "Lambda to EventBridge")

API Gateway to Lambda to RDS Proxy to RDS Postgres Instance

Description (this must include a thorough explanation of the pattern together with details of IAM permissioning)

[WIP]

Deployment commands

[WIP]

GitHub PR for template:

[WIP]

Payload example (e.g. Lambda event payload from source service).

[WIP]

Additional resources (optional: link and anchor text, up to 5 resources)

[WIP]

Author bio

Name: Andrew Brown
Photo URL: https://pbs.twimg.com/profile_images/1340449423940325380/RZ6J4hDo_400x400.jpg
Twitter handle: @andrewbrown
Description (up to 255 chars): AWS Community Hero from Canada 🇨🇦

omenking commented 3 years ago

I have code for this split between two SAM templates. I just need to refactor it into a single template and follow the document outline, and then you should have a very nice submission soon.

omenking commented 3 years ago

So this is some of my existing code, though I have two templates: one that bootstraps an API Gateway, and then this one, which lets you attach endpoints as separate SAM templates.

The reason I have a Dockerfile is the Postgres native extensions library.

I'll simplify this and make it one template.

AWSTemplateFormatVersion: '2010-09-09'
# Transform: AWS::Serverless allows us to use SAM specific CloudFormation language eg. AWS::Serverless::Api
Transform: 'AWS::Serverless-2016-10-31'
Parameters:
  ApiStack:
    Type: String
  DatabaseStack:
    Type: String
  MemorySize:
    Type: Number
    Description: 'The memory size in megabytes for the Lambda'
    Default: 128
    MinValue: 128
    MaxValue: 10240
  Timeout:
    Type: Number
    Description: 'The timeout in seconds for the Lambda'
    Default: 10
    MinValue: 1
    MaxValue: 900
  SubscriptionEndpoint:
    Type: String
  SubscriptionProtocol:
    Type: String
    AllowedValues:
      - http
      - https
  AllowOrigin:
    Type: String
Resources:
  PsqlSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      Endpoint: !Ref SubscriptionEndpoint
      Protocol: !Ref SubscriptionProtocol
      TopicArn: !Ref PsqlTopic
  PsqlTopic:
    Type: AWS::SNS::Topic
  PsqlResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      RestApiId:
        Fn::ImportValue: !Sub ${ApiStack}ApiId
      ParentId:
        Fn::ImportValue: !Sub ${ApiStack}ApiRootResourceId
      PathPart: 'psql'
  PsqlMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: NONE
      HttpMethod: POST
      ResourceId: !Ref PsqlResource
      RestApiId:
        Fn::ImportValue: !Sub ${ApiStack}ApiId
      Integration:
        IntegrationHttpMethod: POST
        Type: AWS_PROXY
        Uri: !Sub
          - arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${lambdaArn}/invocations
          - lambdaArn: !GetAtt PsqlFunction.Arn
        IntegrationResponses:
        - StatusCode: '200'
          ResponseParameters:
            method.response.header.Access-Control-Allow-Headers: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
            method.response.header.Access-Control-Allow-Methods: "'POST'"
            method.response.header.Access-Control-Allow-Origin: !Sub "'${AllowOrigin}'"
      MethodResponses:
      - StatusCode: '200'
        ResponseParameters:
          method.response.header.Access-Control-Allow-Headers: false
          method.response.header.Access-Control-Allow-Methods: false
          method.response.header.Access-Control-Allow-Origin: false
  PsqlPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !GetAtt PsqlFunction.Arn
      Principal: apigateway.amazonaws.com
      SourceArn:
        Fn::Join:
          - ""
          - - 'arn:aws:execute-api:'
            - !Ref AWS::Region
            - ':'
            - !Ref AWS::AccountId
            - ':'
            - { "Fn::ImportValue" : {"Fn::Sub": "${ApiStack}ApiId" } }
            - '/*/*/psql'
  # Create an AWS Lambda function
  PsqlFunction:
    Type: AWS::Serverless::Function
    # We need to specify Metadata for the SAM build command
    Metadata:
      DockerContext: ../
      Dockerfile: Dockerfile
    Properties:
      PackageType: Image
      ## Turn on X-Ray Tracing
      # Tracing: Active
      MemorySize: !Ref MemorySize
      Timeout: !Ref Timeout
      Role: !GetAtt PsqlFunctionRole.Arn
      # https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html
      # AutoPublishAlias: By adding this property and specifying an alias name, AWS SAM:
      # - Detects when new code is being deployed, based on changes to the Lambda function's Amazon S3 URI.
      # - Creates and publishes an updated version of that function with the latest code.
      # - Creates an alias with a name that you provide (unless an alias already exists), and points to the updated version of the Lambda function. Function invocations should use the alias qualifier to take advantage of this. If you aren't familiar with Lambda function versioning and aliases, see AWS Lambda Function Versioning and Aliases.
      AutoPublishAlias: live
      Environment:
        Variables:
          TOPIC_ARN:
            !Ref PsqlTopic
          DB_PORT:
            Fn::ImportValue: !Sub ${DatabaseStack}RdsPort
          DB_USER:
            Fn::ImportValue: !Sub ${DatabaseStack}RdsUsername
          DB_HOST:
            Fn::ImportValue: !Sub ${DatabaseStack}RdsProxyEndpoint
          DB_DATABASE:
            Fn::ImportValue: !Sub ${DatabaseStack}RdsDatabaseName
      DeploymentPreference:
        Type: AllAtOnce
      VpcConfig:
        SecurityGroupIds:
          - Fn::ImportValue: !Sub ${DatabaseStack}LambdaSgGroupId
        SubnetIds:
          Fn::Split: [',', {'Fn::ImportValue': { 'Fn::Sub' : '${DatabaseStack}SubnetIds' } } ]
  PsqlFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      # The trust policy that is associated with this role.
      # Trust policies define which entities can assume the role.
      # You can associate only one trust policy with a role.
      # We want to only entrust AWS Lambda with this role.
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Action: ['sts:AssumeRole']
          Effect: Allow
          Principal:
            Service: [lambda.amazonaws.com]
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSLambdaExecute
        - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
      Policies:
        - PolicyName: PsqlFunctionPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Action:
                - sns:Publish
                Effect: Allow
                Resource:
                  - !Ref PsqlTopic
              - Effect: Allow
                Action:
                  - rds-db:connect
                Resource:
                  Fn::Join:
                    - ""
                    - - 'arn:aws:rds-db:'
                      - !Ref AWS::Region
                      - ':'
                      - !Ref AWS::AccountId
                      - ':dbuser:'
                      - { "Fn::ImportValue" : {"Fn::Sub": "${DatabaseStack}RdsProxyId" } }
                      - '/'
                      - { "Fn::ImportValue" : {"Fn::Sub": "${DatabaseStack}RdsUsername" } }
omenking commented 3 years ago

Hmm, I might not need a Dockerfile since aws-psycopg2 is compiled specifically for the Python runtime. In my real use case I had a bunch of additional libraries.

boto3
aws-psycopg2

In a real use case, Docker would likely be preferable, but using the Lambda Python runtime is easier for beginners. I suppose I could always submit two templates.

If anyone is curious about the contents of the Dockerfile:

# Use AWS python base image
FROM public.ecr.aws/lambda/python:3.8

# Copy over function specific code
COPY function/function.py function/requirements.txt /var/task/

# Install requirements
RUN python3.8 -m pip install -r /var/task/requirements.txt

# Set the file function.py and the function handler as the lambda function.
CMD ["function.handler"]
omenking commented 3 years ago

Here is the general code that is needed. I just refactored this and haven't tested it yet.

import os
import json
import logging

import boto3
import psycopg2

# Logging for CloudWatch
logging.basicConfig(level = logging.INFO)
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Connection settings come from the environment variables set in the SAM template
config = {
  'host': os.environ['DB_HOST'],
  'port': os.environ['DB_PORT'],
  'user': os.environ['DB_USER'],
  'database': os.environ['DB_DATABASE']
}

rds = boto3.client('rds')

# We authenticate via IAM and pass the token along to the RDS Proxy
def iam_auth_token():
  logger.info('iam_auth_token')
  return rds.generate_db_auth_token(
    DBHostname=config['host'],
    Port=config['port'],
    DBUsername=config['user'],
    Region=os.environ.get('AWS_REGION', 'us-east-1'))

def execute_query(query):
  logger.info('execute_query')
  new_config = config.copy()
  new_config['password'] = iam_auth_token()
  new_config['sslmode'] = 'require'
  new_config['connect_timeout'] = 3

  try:
    logger.info('psycopg2:connect')
    connection = psycopg2.connect(**new_config)
  except Exception as err:
    logger.error('psycopg2:error:connect')
    logger.error(err)
    raise  # without a connection there is nothing more to do

  cursor = connection.cursor()

  # query for data
  try:
    logger.info('psycopg2:execute')
    cursor.execute(query)
  except Exception as err:
    logger.error('psycopg2:error:execute')
    logger.error(err)
    connection.close()
    raise

  rows = cursor.fetchall()

  # prepend the column names as the first row
  column_names = [desc[0] for desc in cursor.description]
  rows.insert(0, column_names)
  logger.info('psycopg2:close')
  connection.close()
  return rows

def handler(event, context):
  logger.info(event)

  body = json.loads(event['body'])
  query = body['query']

  rows = execute_query(query)
  logger.info('rows_returned')
  logger.info(rows)

  return {
      'statusCode': 200,
      'body': json.dumps(rows)
  }
omenking commented 3 years ago

I had my database stack as a separate cross-stack. I just brought it into this one; now I just need to bring in the API layer.

AWSTemplateFormatVersion: '2010-09-09'
# Transform: AWS::Serverless allows us to use SAM specific CloudFormation language eg. AWS::Serverless::Api
Transform: 'AWS::Serverless-2016-10-31'
Parameters:
  ApiStack:
    Type: String
  # Database Parameters --------------------
  Username:
    Type: String
  BackupRetentionPeriod:
    Type: Number
    Default: 0
  InstanceClass:
    Type: String
    Default: db.t2.micro
  EngineVersion:
    Type: String
    #  DB Proxy only supports very specific versions of Postgres
    #  https://stackoverflow.com/questions/63084648/which-rds-db-instances-are-supported-for-db-proxy
    Default: '11.5'
  PubliclyAccessible:
    Type: String
    AllowedValues:
      - true
      - false
    Default: false
  DeletionProtection:
    Type: String
    AllowedValues:
      - true
      - false
    Default: false
  RdsDatabaseName:
    Type: String
  RdsPort:
    Type: Number
    Default: 5432
  VpcId:
    Type: AWS::EC2::VPC::Id
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
  # Lambda Parameters ----------------------
  MemorySize:
    Type: Number
    Description: 'The memory size in megabytes for the Lambda'
    Default: 128
    MinValue: 128
    MaxValue: 10240
  Timeout:
    Type: Number
    Description: 'The timeout in seconds for the Lambda'
    Default: 10
    MinValue: 1
    MaxValue: 900
  AllowOrigin:
    Type: String
Resources:
  PsqlTopic:
    Type: AWS::SNS::Topic
  PsqlResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      RestApiId:
        Fn::ImportValue: !Sub ${ApiStack}ApiId
      ParentId:
        Fn::ImportValue: !Sub ${ApiStack}ApiRootResourceId
      PathPart: 'psql'
  PsqlMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: NONE
      HttpMethod: POST
      ResourceId: !Ref PsqlResource
      RestApiId:
        Fn::ImportValue: !Sub ${ApiStack}ApiId
      Integration:
        IntegrationHttpMethod: POST
        Type: AWS_PROXY
        Uri: !Sub
          - arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${lambdaArn}/invocations
          - lambdaArn: !GetAtt PsqlFunction.Arn
        IntegrationResponses:
        - StatusCode: '200'
          ResponseParameters:
            method.response.header.Access-Control-Allow-Headers: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
            method.response.header.Access-Control-Allow-Methods: "'POST'"
            method.response.header.Access-Control-Allow-Origin: !Sub "'${AllowOrigin}'"
      MethodResponses:
      - StatusCode: '200'
        ResponseParameters:
          method.response.header.Access-Control-Allow-Headers: false
          method.response.header.Access-Control-Allow-Methods: false
          method.response.header.Access-Control-Allow-Origin: false
  PsqlPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !GetAtt PsqlFunction.Arn
      Principal: apigateway.amazonaws.com
      SourceArn:
        Fn::Join:
          - ""
          - - 'arn:aws:execute-api:'
            - !Ref AWS::Region
            - ':'
            - !Ref AWS::AccountId
            - ':'
            - { "Fn::ImportValue" : {"Fn::Sub": "${ApiStack}ApiId" } }
            - '/*/*/psql'
  # Create an AWS Lambda function
  PsqlFunction:
    Type: AWS::Serverless::Function
    # We need to specify Metadata for the SAM build command
    Metadata:
      DockerContext: ../
      Dockerfile: Dockerfile
    Properties:
      PackageType: Image
      ## Turn on X-Ray Tracing
      # Tracing: Active
      MemorySize: !Ref MemorySize
      Timeout: !Ref Timeout
      Role: !GetAtt PsqlFunctionRole.Arn
      # https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/automating-updates-to-serverless-apps.html
      # AutoPublishAlias: By adding this property and specifying an alias name, AWS SAM:
      # - Detects when new code is being deployed, based on changes to the Lambda function's Amazon S3 URI.
      # - Creates and publishes an updated version of that function with the latest code.
      # - Creates an alias with a name that you provide (unless an alias already exists), and points to the updated version of the Lambda function. Function invocations should use the alias qualifier to take advantage of this. If you aren't familiar with Lambda function versioning and aliases, see AWS Lambda Function Versioning and Aliases.
      AutoPublishAlias: live
      Environment:
        Variables:
          TOPIC_ARN:
            !Ref PsqlTopic
          DB_PORT: !Ref RdsPort
          DB_USER: !Ref Username
          DB_HOST: !GetAtt DBProxy.Endpoint
          DB_DATABASE: !Ref RdsDatabaseName
      DeploymentPreference:
        Type: AllAtOnce
      VpcConfig:
        SecurityGroupIds:
          - !GetAtt LambdaSg.GroupId
        SubnetIds: !Ref SubnetIds
  PsqlFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      # The trust policy that is associated with this role.
      # Trust policies define which entities can assume the role.
      # You can associate only one trust policy with a role.
      # We want to only entrust AWS Lambda with this role.
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Action: ['sts:AssumeRole']
          Effect: Allow
          Principal:
            Service: [lambda.amazonaws.com]
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSLambdaExecute
        - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
      Policies:
        - PolicyName: PsqlFunctionPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Action:
                - sns:Publish
                Effect: Allow
                Resource:
                  - !Ref PsqlTopic
              - Effect: Allow
                Action:
                  - rds-db:connect
                Resource:
                  Fn::Join:
                    - ""
                    - - 'arn:aws:rds-db:'
                      - !Ref AWS::Region
                      - ':'
                      - !Ref AWS::AccountId
                      - ':dbuser:'
                      - !Ref DBProxy
                      - '/'
                      - !Ref Username

  DBProxyRole:
    Type: AWS::IAM::Role
    Properties:
      Path: /
      # The trust policy that is associated with this role.
      # Trust policies define which entities can assume the role.
      # You can associate only one trust policy with a role.
      # We want to entrust only the RDS service with this role.
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Action: ['sts:AssumeRole']
          Effect: Allow
          Principal:
            Service: [rds.amazonaws.com]
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSLambdaExecute
      Policies:
        - PolicyName: DBProxyPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Action:
                - secretsmanager:GetSecretValue
                Effect: Allow
                Resource:
                  - !Ref RdsSecret
  # you can't log into the proxy unless you're in the same VPC.
  DBProxy:
    Type: AWS::RDS::DBProxy
    DependsOn:
      - DBProxyRole
    Properties:
      Auth:
        - { AuthScheme: SECRETS, SecretArn: !Ref RdsSecret, IAMAuth: REQUIRED  }
      DBProxyName: 'rds-proxy'
      EngineFamily: 'POSTGRESQL'
      RoleArn: !GetAtt DBProxyRole.Arn
      IdleClientTimeout: 120
      RequireTLS: true
      DebugLogging: false
      VpcSubnetIds: !Ref SubnetIds
      VpcSecurityGroupIds:
        - !GetAtt RDSPostgresSG.GroupId
  ProxyTargetGroup:
    Type: AWS::RDS::DBProxyTargetGroup
    Properties:
      DBProxyName: !Ref DBProxy
      DBInstanceIdentifiers: [!Ref RdsInstance]
      TargetGroupName: default
      ConnectionPoolConfigurationInfo:
        MaxConnectionsPercent: 12
        MaxIdleConnectionsPercent: 11
        ConnectionBorrowTimeout: 120
  LambdaSg:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for the Lambda functions that need access to RDS
      GroupName: 'sandbox-lambda-sg'
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId
  RDSPostgresSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for RDS Postgres
      GroupName: 'sandbox-rds-sg'
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: !Ref RdsPort
          ToPort: !Ref RdsPort
          SourceSecurityGroupId: !GetAtt LambdaSg.GroupId
        - IpProtocol: tcp
          FromPort: !Ref RdsPort
          ToPort: !Ref RdsPort
          CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId
  RdsInstance:
    Type: AWS::RDS::DBInstance
    # TODO - Remember to change this back to snapshot for production!
    # can't use !Ref on DeletionPolicy and Conditions sucks
    DeletionPolicy: 'Delete'
    Properties:
      MasterUsername: !Ref Username
      MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref RdsSecret, ':SecretString:password}}' ]]
      AllocatedStorage: '20'
      AllowMajorVersionUpgrade: true
      AutoMinorVersionUpgrade: true
      Port: !Ref RdsPort
      # IAM auth is enforced at the proxy (IAMAuth: REQUIRED), so it is
      # turned off on the instance itself.
      EnableIAMDatabaseAuthentication: false
      BackupRetentionPeriod: !Ref BackupRetentionPeriod
      DBInstanceClass: !Ref InstanceClass
      DBName: !Ref RdsDatabaseName
      Engine: postgres
      DeletionProtection: !Ref DeletionProtection
      EngineVersion: !Ref EngineVersion
      PubliclyAccessible: !Ref PubliclyAccessible
      VPCSecurityGroups:
        - !GetAtt RDSPostgresSG.GroupId
  RdsSecret:
    Type: 'AWS::SecretsManager::Secret'
    Properties:
      Name: RdsSandboxSecret
      Description: "This secret has a dynamically generated secret password."
      GenerateSecretString:
        SecretStringTemplate: !Sub '{ "username": "${Username}" }'
        GenerateStringKey: "password"
        PasswordLength: 30
        ExcludeCharacters: '"@/\'
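
A consumer outside the proxy (e.g. a one-off schema-migration script) could read the generated secret back like this. A sketch assuming the `RdsSandboxSecret` name from the template, with the JSON parsing split out so it can be tried without AWS access:

```python
import json

def parse_rds_secret(secret_string):
  """Split the JSON SecretString produced by GenerateSecretString above."""
  data = json.loads(secret_string)
  return data['username'], data['password']

def fetch_rds_credentials(secret_id='RdsSandboxSecret', region='us-east-1'):
  """Fetch and parse the secret; needs boto3 and AWS credentials at runtime."""
  import boto3  # imported lazily so the parsing helper stays dependency-free
  client = boto3.client('secretsmanager', region_name=region)
  resp = client.get_secret_value(SecretId=secret_id)
  return parse_rds_secret(resp['SecretString'])

# Offline example of just the parsing step (hypothetical values):
user, password = parse_rds_secret('{ "username": "app_user", "password": "s3cr3t" }')
```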
jbesw commented 3 years ago

Hi Andrew! Let me know when this is ready for submission and we will put in the queue over here.

jbesw commented 2 years ago

Andrew, I'm going to close this for now to help keep the issues list current but please reopen when you have a completed pattern for submission.