aws / serverless-application-model

The AWS Serverless Application Model (AWS SAM) transform is an AWS CloudFormation macro that transforms SAM templates into CloudFormation templates.
https://aws.amazon.com/serverless/sam
Apache License 2.0

How to reference an existing s3 bucket & prefix #124

Closed wliao008 closed 7 years ago

wliao008 commented 7 years ago

In this example: https://github.com/awslabs/serverless-application-model/blob/master/examples/2016-10-31/s3_processor/template.yaml, a new bucket is created. However, I need to reference an existing bucket. For example, how can I trigger the Lambda when a *.yaml file is uploaded to s3://mybucket/folder?

vikrambhatt commented 7 years ago

Hi,

At the moment, SAM does not support using an existing bucket as an event source. This is mentioned in the documentation: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#s3

"NOTE: To specify an S3 bucket as an event source for a Lambda function, both resources have to be declared in the same template. AWS SAM does not support specifying an existing bucket as an event source."

wliao008 commented 7 years ago

Hmm, ok, but I do need to have the Lambda listen on an existing bucket/folder. What are the workarounds if I still want to make use of SAM?

iconara commented 7 years ago

I just came across this too. Will this be fixed? This makes SAM very hard to use for the S3 event use case.

sbarski commented 7 years ago

I want to up-vote this as well. At the moment this is a showstopper for me too.

vikrambhatt commented 7 years ago

This is not supported in CloudFormation. Basically, CloudFormation cannot change any AWS resource outside of the stack. Unfortunately, as of now, there is no workaround for this limitation.

sbarski commented 7 years ago

@vikrambhatt do you think AWS will come out with any tooling on top of SAM/CFN to assist with cases such as this? It does make SAM hard to use, unfortunately.

mwarner1 commented 7 years ago

It is possible to have CloudFormation send parameters to and invoke a Lambda function, where you can programmatically make changes to existing resources. However, note that your Lambda also has to be able to remove any changes it makes when the stack is deleted (CloudFormation will not know about these changes and cannot auto-delete them).
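What this describes is essentially a Lambda-backed custom resource. A minimal sketch of that pattern, assuming a hypothetical handler and bucket name (not taken from this thread); the handler has to set the bucket notification, remove it again when the stack is deleted, and send a success/failure response back to CloudFormation:

Resources:
  NotificationSetupFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./notification_setup   # hypothetical handler that calls S3 PutBucketNotificationConfiguration
      Handler: app.handler
      Runtime: ruby2.5
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: s3:PutBucketNotificationConfiguration
              Resource: arn:aws:s3:::my-existing-bucket

  # Custom resource: CloudFormation invokes the function on stack create/update/delete,
  # passing the properties below in the event payload.
  ExistingBucketNotification:
    Type: Custom::S3BucketNotification
    Properties:
      ServiceToken: !GetAtt NotificationSetupFunction.Arn
      BucketName: my-existing-bucket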

sanathkr commented 7 years ago

Yeah, CloudFormation folks are aware of this limitation and working to solve it. We don't have an ETA yet, but I want to let you guys know that this is in the works.

I am going to close this issue because SAM is helpless here without the CFN feature.

helenoalves commented 6 years ago

I know this issue is closed, but when will we have some news about it?

sanyer commented 6 years ago

Faced this limitation recently and worked around it with a combination of S3 -> SNS and SNS -> SAM. Works pretty well and is completely automated.
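For anyone who wants to try the same approach, here is a minimal sketch of the SAM side, assuming a hypothetical ExistingBucketName parameter and topic (the S3 -> SNS half is configured on the existing bucket outside the template, e.g. with aws s3api put-bucket-notification-configuration):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Parameters:
  ExistingBucketName:
    Type: String    # name of the pre-existing bucket (placeholder)

Resources:
  UploadTopic:
    Type: AWS::SNS::Topic

  # Allow the existing bucket to publish its notifications to the topic
  UploadTopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      Topics:
        - !Ref UploadTopic
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: s3.amazonaws.com
            Action: sns:Publish
            Resource: !Ref UploadTopic
            Condition:
              ArnLike:
                aws:SourceArn: !Sub arn:aws:s3:::${ExistingBucketName}

  ProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: app.handler
      Runtime: ruby2.5
      Events:
        Uploads:
          Type: SNS             # SAM creates the subscription and Lambda permission
          Properties:
            Topic: !Ref UploadTopic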

helenoalves commented 6 years ago

Thanks @sanyer for your feedback! I had never used SNS before; Amazon has a lot of amazing tools and I don't know them all. My friend @alexiscviurb, an amazing infrastructure engineer, and I created a script to automate the task in a Jenkins pipeline. This shell script performs these steps:

  1. Create the functions with CloudFormation.

  2. List the stack resources: aws cloudformation list-stack-resources --stack-name analytics-functions --query "StackResourceSummaries[?LogicalResourceId=='$1'].[PhysicalResourceId]" --output text

  3. Get the Lambda function ARN: aws lambda list-functions --query "Functions[?FunctionName=='$FunctionName'].[FunctionArn]" --output text

  4. Replace the FunctionName::ARN placeholder in the notification JSON (sketched after this list): sed -i "s/FunctionName::ARN/$FunctionName/" sam-configuration.json

  5. Remove the bucket's old notification configuration: aws s3api put-bucket-notification-configuration --bucket=bucket-name --notification-configuration="{}"

  6. Bind the new function to the bucket: aws s3api put-bucket-notification-configuration --bucket=bucket-name --notification-configuration file://sam-configuration.json
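
The sam-configuration.json patched in step 4 would look roughly like this (a sketch of the s3api notification-configuration payload; the Id, prefix and suffix values are placeholders, and FunctionName::ARN is the token the sed command replaces):

{
  "LambdaFunctionConfigurations": [
    {
      "Id": "AnalyticsFunctionTrigger",
      "LambdaFunctionArn": "FunctionName::ARN",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            { "Name": "prefix", "Value": "folder/" },
            { "Name": "suffix", "Value": ".yaml" }
          ]
        }
      }
    }
  ]
}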

It's a workaround too, but I hope it helps somebody. Regards, Heleno

rzijp commented 6 years ago

@sanyer, any chance that you could share more details about your automation? Did you use SAM (not sure if this is limited by #249 as well), or other means?

sanyer commented 6 years ago

@rzijp I'll try to remember and find where and how it was done.

landorid commented 6 years ago

For an existing S3 bucket, you can use this serverless plugin.

golharam commented 6 years ago

Well, this just sucks. We should be able to specify ARN references to existing buckets. You allow it for ManagedPolicyArns on an IAM role; referencing a bucket shouldn't be an issue... unless a change is being made on the bucket itself?

sworisbreathing commented 5 years ago

If that's the case then the documentation is incorrect, since it gives an example of referencing a bucket that is not managed by SAM.

didopop3 commented 5 years ago

If that's the case then the documentation is incorrect, since it gives an example of referencing a bucket that is not managed by SAM.

It clearly says: "To specify an S3 bucket as an event source for a Lambda function, both resources have to be declared in the same template. AWS SAM does not support specifying an existing bucket as an event source." https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#example-awsserverlessfunction

We need the ability to reference an existing S3 bucket.

metaskills commented 5 years ago

The limitation of CloudFormation makes complete sense to me. I was initially upset to hit this limitation myself, then put my head down and worked out what I think is a good workaround that fits both CloudFormation and SAM best practices, using Bash as a little bit of IaC glue where needed. The solution: first, use no event in your template.yaml file, and add a permission for the S3 bucket to invoke the function. To make this work, also output the function's ARN. Pretty much what @helenoalves shared.

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: app.handler
      Runtime: ruby2.5
  ImageBucketPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref MyFunction
      Principal: s3.amazonaws.com
      SourceAccount: !Ref 'AWS::AccountId'
      SourceArn: !Sub arn:aws:s3:::my-bucket-name

Outputs:
  MyFunctionArn:
    Description: My Function Arn
    Value: !GetAtt MyFunction.Arn

To connect the S3 events, a small one-time Bash script is used; I usually put these in the project's ops directory.

FUNCARN=$(aws cloudformation describe-stacks \
  --stack-name "my-stack-name" \
  --query 'Stacks[0].Outputs[0].OutputValue' \
  --output text
)

JSON=$(cat <<-EOF
  {
    "LambdaFunctionConfigurations": [
      {
        "Id": "MyEventsName",
        "LambdaFunctionArn": "${FUNCARN}",
        "Events": [
          "s3:ObjectCreated:*"
        ]
      }
    ]
  }
EOF
)

aws s3api \
  put-bucket-notification-configuration \
  --bucket="my-bucket-name" \
  --notification-configuration "$JSON"

dragonfax commented 5 years ago

Just another "me too". I hit this today.

tomcant commented 5 years ago

I hit this issue recently and used the solution proposed above by @metaskills as a workaround. I've written a Bash script to make the whole thing a bit simpler. Hopefully someone else that lands on this thread will find it useful, and if anyone wants to suggest an improvement then please do: https://gist.github.com/tomcant/c31a08123673e91d9560737f4380cff0.

Here's the script usage information:

Configure an S3 bucket ObjectCreated notification for the given Lambda function.

Usage: ./configure-s3-lambda-notification.sh BUCKET FUNCTION

Arguments:
  BUCKET     name of the S3 bucket that should trigger the notification
  FUNCTION   name of the Lambda function that should receive the notification

The script uses the AWS CLI (tested with version 1.16.276) so you'll need to supply valid AWS credentials for the account containing the resources. How you invoke the script depends on how you supply your credentials. I usually set the AWS_PROFILE environment variable or AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_DEFAULT_REGION, like this:

AWS_PROFILE=profile ./configure-s3-lambda-notification.sh BUCKET FUNCTION

or...

AWS_DEFAULT_REGION=region AWS_ACCESS_KEY_ID=key AWS_SECRET_ACCESS_KEY=secret ./configure-s3-lambda-notification.sh BUCKET FUNCTION

If you provide your credentials in some other way (e.g. EC2 instance metadata) then running the script without the extra environment variables should work just fine.

The script also takes care of adding permissions for S3 to invoke the function, if necessary. The biggest limitation right now is that the script doesn't support setting filters on the notification (e.g. path prefix/suffix), but that can easily be updated in the bucket's console UI afterwards.

Note that the jq JSON processor is also required.

OscarVanL commented 5 years ago

Ran into this today too, kinda disappointed this isn't supported.

sbilello commented 4 years ago

+1

NiminU commented 4 years ago

Need this functionality for one of our use cases, hope this will be considered soon.

gilshelef commented 4 years ago

+1

subzero112233 commented 4 years ago

+1

valeramaniuk commented 4 years ago

+1

IanLeeClaxton commented 4 years ago

+1 The only other option is Terraform, but this has its issues too.

djm commented 4 years ago

Amazon just released CloudFormation support for importing existing resources into a given stack. I haven't tried it yet, but it does sound like this was the upstream feature this thread was waiting for, so I thought I'd share.
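If anyone tries it: as I understand the import flow, the bucket has to be declared in the template with a DeletionPolicy, and the import itself is performed through a change set of type IMPORT (console or CLI). Once imported, the bucket lives in the stack and can be referenced like any other resource. A minimal sketch of the template side, with a placeholder bucket name:

Resources:
  ExistingBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain          # required on resources being imported
    Properties:
      BucketName: my-existing-bucket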

andrx commented 4 years ago

+1, for now switched those to Terraform.

ecmonsen commented 4 years ago

+1

kaushalye commented 4 years ago

+1

monasuncion commented 4 years ago

+1

johnwei2019 commented 4 years ago

+1

IanLeeClaxton commented 4 years ago

+1 Thanks

keetonian commented 4 years ago

@djm I looked into that for this particular problem. It works if you only have one stack that needs to reference the bucket; if you want to reference it from multiple stacks, however, it doesn't work and another solution is needed.

CJohnsonLehi commented 4 years ago

Can't believe this is still an issue. It could have simply been solved by adding an extra property to each resource called: Existing: <true|false>

nicofuccella commented 4 years ago

+1

Kinnary-Raichura commented 4 years ago

+1

@CJohnsonLehi: Existing: <true|false> works for the Serverless Framework, not for the SAM model.
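
For comparison, the Serverless Framework syntax being referred to looks roughly like this in serverless.yml (a sketch; the bucket and handler names are placeholders):

functions:
  processor:
    handler: handler.process
    events:
      - s3:
          bucket: my-existing-bucket
          event: s3:ObjectCreated:*
          existing: true    # attach to the existing bucket instead of creating one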

maxalbrecht commented 4 years ago

+1. Just to note that this functionality/issue was requested 2 and a half years ago.

austingriff commented 4 years ago

+1

austingriff commented 4 years ago

Honestly, I'm about to abandon SAM and just go straight CloudFormation. Without this feature, SAM is useless.

maxalbrecht commented 4 years ago

For anyone looking for a workaround:

I ran my CloudFormation template without the event, and added the event manually afterwards in the AWS console. It's not ideal, but I only had to do it once, and I have been able to update the code in the Lambda function without issues.

robin-zhao commented 4 years ago

+1

Still an issue.

sahil-gt commented 4 years ago

+1 Are there no updates as to what the situation is regarding this issue?

miekassu commented 4 years ago

+1 We need this.

martimfj commented 4 years ago

+1

Still an issue.

adamclark64 commented 4 years ago

...Still an issue.

keetonian commented 4 years ago

https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/79

thenninger commented 4 years ago

+1

victor-samson-mo commented 4 years ago

+1