Closed wieshka closed 1 year ago
Finished up actual code - can be found here to replicate issue: https://github.com/wieshka/additional-cloudfront-metrics-from-access-logs
@wieshka What do you mean by 'Event is not being created'?
Are you not seeing the AWS::Lambda::Permission, or is the NotificationConfiguration property missing on the bucket?
I deployed the template you have above, uploaded a file to S3, and saw the LambdaFunction be triggered. Am I missing something?
If you use the S3 event in SAM, then you don't see any trigger in the Lambda configuration panel. But the Lambda function is executed when you drop a file in the S3 bucket.
I'm seeing the same behavior as @bottemav
Used the provided template to reproduce the issue.
The Lambda permission created by SAM looks like this:
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "LogToWatchS3CreateObjectPermission-F3WNVM1OUAMX",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "lambda:invokeFunction",
      "Resource": "arn:aws:lambda:<region>:<account_id>:function:LogToWatch-1WQI7PUOH1EF",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "<account_id>"
        }
      }
    }
  ]
}
But Lambda expects this in order to show the trigger in the console:
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "lambda-a5c4fbbd-61fc-4b08-82b4-7dd593c7f65f",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:303769779339:function:test-LogToWatch-1WQI7PUOH1EF",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "303769779339"
        },
        "ArnLike": {
          "AWS:SourceArn": "<S3 bucket Arn>"
        }
      }
    }
  ]
}
It is missing this part:
"ArnLike": {
  "AWS:SourceArn": "<S3 bucket Arn>"
}
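As a rough illustration of the difference between the two policies, here is a small Python sketch that checks a resource policy for the bucket-scoped condition. The policy content and ARNs are placeholders, and the console's actual rendering logic is not public; this only mirrors the structural difference described above.

```python
import json

def has_bucket_scoped_statement(policy_json: str, bucket_arn: str) -> bool:
    """Return True if some statement grants s3.amazonaws.com access
    scoped to bucket_arn via an ArnLike / AWS:SourceArn condition."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal", {})
        if principal.get("Service") != "s3.amazonaws.com":
            continue
        condition = stmt.get("Condition", {})
        if condition.get("ArnLike", {}).get("AWS:SourceArn") == bucket_arn:
            return True
    return False

# A policy shaped like the SAM-generated one above: account condition only,
# no source-bucket condition (account id and bucket name are placeholders).
sam_style_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "lambda:invokeFunction",
        "Condition": {"StringEquals": {"AWS:SourceAccount": "111122223333"}},
    }],
})
assert not has_bucket_scoped_statement(sam_style_policy, "arn:aws:s3:::my-bucket")
```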
In the CloudFormation template, the AWS::Lambda::Permission resource after the transform looks like this:
"LogToWatchS3CreateObjectPermission": {
  "Type": "AWS::Lambda::Permission",
  "Properties": {
    "Action": "lambda:invokeFunction",
    "SourceAccount": {
      "Ref": "AWS::AccountId"
    },
    "FunctionName": {
      "Ref": "LogToWatch"
    },
    "Principal": "s3.amazonaws.com"
  }
}
It is missing the SourceArn property. It should be something like this:
"LogToWatchS3CreateObjectPermission": {
  "Type": "AWS::Lambda::Permission",
  "Properties": {
    "Action": "lambda:invokeFunction",
    "SourceAccount": {
      "Ref": "AWS::AccountId"
    },
    "FunctionName": {
      "Ref": "LogToWatch"
    },
    "Principal": "s3.amazonaws.com",
    "SourceArn": {
      "Fn::GetAtt": ["TargetBucket", "Arn"]
    }
  }
}
However, the issue is that SAM currently does not scope the permission to a specific bucket. If we fixed this in SAM by scoping it to a specific bucket so that it shows up properly in the Lambda console, it could break existing customers who rely on the broader permission.
Just wondering why this is closed? I'm seeing the same issue. The bucket event source does not show up in the console. It makes it very confusing to know what is going on.
If you use the S3 event in SAM, then you don't see any trigger in the Lambda configuration panel. But the Lambda function is executed when you drop a file in the S3 bucket.
I get this problem too. When I deploy the app to AWS with an S3 trigger configured, I don't see the trigger in the AWS console. I also tried DynamoDB and API Gateway triggers, and those do show up in the console. I don't know why.
Another question: I can't set the bucket name when I configure the S3 event, yet the older docs show the bucket name being set. Has something changed between the old and current versions?
Currently experiencing this issue, any developments?
@gartlady could you provide any additional information? The S3 triggers still don't show up in the lambda console, but should still work. Are you saying that you can't see the trigger or that the trigger doesn't work?
The S3 triggers are working but they are not appearing in the lambda console.
I am experiencing a similar issue with SQS events. Using SAM to deploy a Lambda function with an SQS event, the function receives messages from the queue, but the trigger is not visible in the AWS console.
However the issue is that, currently SAM makes the permission not scoped to a specific bucket. If we are to fix it in SAM making it scoped to a specific bucket so that it can show up properly in Lambda console, this could potentially break existing customer who expects broader permissions.
Please note that when using Events you are expecting SPECIFIC permissions. Granting broader permissions is a bug, not a feature. When creating a Lambda function with SAM and giving it an S3 trigger, I expect that only that bucket is able to trigger the function; anything broader is insecure.
Also experiencing the same issue.
Experienced the same issue. I can see no development was done for one year. Do someone knows another way to configure the same configuration (without SAM)?
use: Type: AWS::Lambda::EventSourceMapping https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-eventsourcemapping.html
exampleTrigger:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    BatchSize: 1
    Enabled: true
    EventSourceArn:
      Fn::GetAtt:
        - source
        - Arn
    FunctionName:
      Fn::GetAtt:
        - CreatedLambda
        - Arn
I can see that AWS::Lambda::EventSourceMapping only supports the following event sources (with batch sizes):
Amazon Kinesis – default 100, max 10,000. Amazon DynamoDB Streams – default 100, max 1,000. Amazon Simple Queue Service – default 10, max 10.
No S3 option.
Experiencing same issue here. How do we open this again?
This is a breaking change, so we wouldn't be able to do this unless we made a new version of SAM. I agree, though, that this should be fixed.
Lu's comment (https://github.com/awslabs/serverless-application-model/issues/300#issuecomment-408950770) describes what needs to be changed.
Same issue here.
~Late to the party, but I think I have the problem with a particular Lambda not showing a Cloudwatch trigger in the console. My other Lambdas (with basically the same trigger) show them in the console.~
~Seems like the Function Policy went missing at some point, although after creating the lambda 2 years ago it was there.~
Actually it was because my trigger was attached to a particular ALIAS that it didn't show up in the console for the "root" Lambda. Switching the version dropdown made it come back. I noticed this when looking at the rules in Cloudwatch, as the rule shows its targets at the bottom of the console, and clicking the target will jump to the correct version.
Has anyone resolved this issue? I've got the same problem: CloudFormation only creates the permission resource.
Still having the issue as well. The "fix" of specifying the bucket ARN in the permission is not viable because it creates an implicit loop between the bucket and the permission: the bucket needs the permission in place before it can create the notification configuration, but the permission needs the bucket ARN.
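The loop described here can be sketched as a tiny dependency-graph check; the resource names below are illustrative only, not taken from any real template:

```python
def find_cycle(deps):
    """Depth-first search for a cycle. `deps` maps a logical resource id
    to the set of resource ids it depends on."""
    visiting, done = set(), set()

    def visit(node):
        visiting.add(node)
        for nxt in deps.get(node, ()):
            if nxt in visiting:          # back edge: a dependency cycle
                return True
            if nxt not in done and visit(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(n not in done and visit(n) for n in list(deps))

# The bucket waits for the permission (for its notification configuration),
# while the permission's SourceArn is !GetAtt Bucket.Arn: mutual dependency.
deps = {
    "Bucket": {"LambdaInvokePermission"},
    "LambdaInvokePermission": {"Bucket"},
}
assert find_cycle(deps)

# Hardcoding the bucket name removes the permission -> bucket edge.
deps["LambdaInvokePermission"] = set()
assert not find_cycle(deps)
```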
@phongtran227 @pierremarieB You can add an AWS::Lambda::Permission to your Resources. It should look something like the following:
LambdaInvokePermission:
  Type: 'AWS::Lambda::Permission'
  Properties:
    FunctionName: !GetAtt MyFunction.Arn
    Action: 'lambda:InvokeFunction'
    Principal: 's3.amazonaws.com'
    SourceAccount: !Sub ${AWS::AccountId}
    SourceArn: !GetAtt MyBucket.Arn
That worked for me.
@henrikbjorn I tried your suggestion, but it doesn't show the S3 triggers in the console. I'm pretty new to this, so I'm using the docs to find my way. I'm using the SAM CLI to bundle and upload the packaged artifact to an S3 bucket. To test, I simply pass the S3 URL of the uploaded artifact to the Lambda function; everything else shows up, but no S3 trigger appears in the console, and no errors either.
@moosakhalid You also need access to the S3 bucket. I found that it is important to specify Version when specifying policies.
I have only tried where i created the s3 bucket in the same CloudFormation template.
@henrikbjorn thanks for the help and hints. I finally got my SAM deployment to work thanks to your Lambda permissions snippet! Your method worked because that's the only way a SAM/CFN deployment will let you refer to an S3 bucket, i.e. if it's created within the same template. It is rather annoying, but I guess it's there for a good reason.
Confirmed that @henrikbjorn Lambda Permissions snippet worked for us too. Thanks!
The @henrikbjorn Lambda permission snippet wasn't working for me by itself, but the observation by @moosakhalid that the resources had to be created in the same template triggered an idea (no pun intended).
Despite having @henrikbjorn's snippet, when I tried to add the trigger manually I got "The provided execution role does not have permissions to call ReceiveMessage on SQS (Service: AWSLambda". Ah ha.
My SQS queue is being created in a different template, and I'm using SAM managed policy SQSPoller in the function Policies statement to allow lambda to read the queue.
However, I note that when you click "Edit Permission Boundary" on the Lambda Application page, the dialog notes:
This policy statement includes ARNs for the resources created in your SAM template. If your application needs access to resources outside of the SAM template, add those resource ARNs to the statement.
Indeed. The Permission Boundary didn't include permissions granted by SQSPoller policy for the Lambda function to access the externally created resource that I am importing with ImportValue.
So in addition to @henrikbjorn snippet, I copied the SQSPoller permissions from the Lambda Execution Role into the PermissionsBoundaryPolicy, et voila.
Now my trigger shows in the console.
This leads me to conclude that the actual bug is the failure of SAM / CloudFormation to error with an explanation that permissions are incomplete. Whereas the Console does give a reasonably clear error that it thinks permissions are inadequate and won't add the trigger until they are fixed.
If you use the S3 event in SAM, then you don't see any trigger in the Lambda configuration panel. But the Lambda function is executed when you drop a file in the S3 bucket.
This saved me a lot of time. Thank you!
Quick Update:
The snippet above works but you have to start from scratch. In other words, if you already have a deployed SAM/CloudFormation template, trying to update the existing Lambda trigger config doesn't work.
You have to delete/un-deploy the template from CloudFormation and then redeploy again. Then the trigger will show up.
Update on list-event-source-mappings: the reason I wanted the trigger shown in the console was that I needed to enable/disable it.
When I run aws lambda list-event-source-mappings, the S3 event (Lambda trigger) does not show up in the results list. Only my SQS events do.
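That is expected behaviour: list-event-source-mappings only covers poll-based sources (Kinesis, DynamoDB Streams, SQS), while an S3 trigger lives in the bucket's notification configuration. As a sketch, S3-to-Lambda triggers can instead be read from the shape returned by boto3's get_bucket_notification_configuration; the payload below is a hand-written placeholder, not real API output:

```python
def lambda_triggers(notification_config: dict) -> list:
    """Extract (lambda_arn, events) pairs from an S3 bucket notification
    configuration, i.e. the dict shape returned by
    s3.get_bucket_notification_configuration(Bucket=...)."""
    return [
        (c["LambdaFunctionArn"], c["Events"])
        for c in notification_config.get("LambdaFunctionConfigurations", [])
    ]

# Placeholder payload shaped like the API response (ARN is illustrative).
config = {
    "LambdaFunctionConfigurations": [{
        "Id": "S3NewObjectEvent",
        "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:LogToWatch",
        "Events": ["s3:ObjectCreated:*"],
    }]
}
assert lambda_triggers(config) == [
    ("arn:aws:lambda:us-east-1:111122223333:function:LogToWatch",
     ["s3:ObjectCreated:*"])
]
```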
I just deployed the pre defined template for s3 and it did not add the trigger.
For those using the S3 template created by the CLI, replace the template with this one and the S3 trigger will be created. I also deleted the stack created by the old template and recreated the stack with the new one. Thanks @henrikbjorn
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  s3-lambda
Parameters:
  AppBucketName:
    Type: String
    Description: "REQUIRED: Unique S3 bucket name to use for the app."
Resources:
  S3JsonLoggerFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/s3-json-logger.s3JsonLoggerHandler
      Runtime: nodejs12.x
      MemorySize: 128
      Timeout: 60
      Policies:
        S3ReadPolicy:
          BucketName: !Ref AppBucketName
      Events:
        S3NewObjectEvent:
          Type: S3
          Properties:
            Bucket: !Ref AppBucket
            Events: s3:ObjectCreated:*
            Filter:
              S3Key:
                Rules:
                  - Name: suffix
                    Value: ".json"
  LambdaInvokePermission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      FunctionName: !GetAtt S3JsonLoggerFunction.Arn
      Action: 'lambda:InvokeFunction'
      Principal: 's3.amazonaws.com'
      SourceAccount: !Sub ${AWS::AccountId}
      SourceArn: !GetAtt AppBucket.Arn
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref AppBucketName
Thank you.
There should be a note about this in the official docs. Just sent feedback to them.
Adding the LambdaInvokePermission resource made the trigger show up in the Lambda UI. Thanks.
This doesn't work for me because it complains about circular dependencies:
E3004: Circular Dependencies for resource DynamoS3Function. Circular dependency with
@dani882 Could you post your template so we can take a look at it.
Yes, here is my template. Please let me know what I'm missing; I tried to follow this thread but no luck so far. @AllanOricil
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Globals:
  Function:
    Timeout: 3
    MemorySize: 128
    Environment:
      Variables:
        JSONBucket:
          Ref: JSONBucket
Description: >
  SAM Template for cce challenge
Parameters:
  S3BucketName:
    Type: String
    Description: S3 Bucket name
Resources:
  MyApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      CorsConfiguration:
        AllowMethods:
          - GET
          - POST
        AllowHeaders:
          - "*"
        AllowOrigins:
          - "*"
  JSONBucket: # Create S3 bucket to be used for upload and retrieve json files
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref S3BucketName
      CorsConfiguration:
        CorsRules:
          - AllowedHeaders:
              - "*"
            AllowedMethods:
              - GET
              - PUT
              - HEAD
            AllowedOrigins:
              - "*"
  ## Lambda functions
  UploadRequestFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: functions/upload_json/
      Handler: app.handler
      Runtime: nodejs12.x
      Timeout: 3
      MemorySize: 128
      Environment:
        Variables:
          UploadBucket: !Ref JSONBucket
      Policies:
        - S3WritePolicy:
            BucketName: !Ref JSONBucket
        ## This permission allows the Lambda function to request signed URLs
        ## for objects that will be publicly readable. Uncomment if you want this ACL.
        - Statement:
            - Effect: Allow
              Resource: !Sub 'arn:aws:s3:::${JSONBucket}/'
              Action:
                - s3:putObjectAcl
      Events:
        UploadAssetAPIEvent:
          Type: HttpApi
          Properties:
            Path: /uploads
            Method: get
            ApiId: !Ref MyApi
  DynamoS3Function:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: functions/retrieve_json/
      Runtime: python3.8
      Handler: lambda_function.lambda_handler
      Policies:
        - DynamoDBWritePolicy:
            TableName:
              Ref: DynamoDBTable
        - S3ReadPolicy:
            BucketName: !Ref S3BucketName
        - Statement:
            - Effect: Allow
              Resource: !Sub 'arn:aws:s3:::${JSONBucket}/'
              Action:
                - s3:GetObject
      Events:
        S3NewObjectEvent:
          Type: S3
          Properties:
            Bucket: !Ref JSONBucket
            Events: s3:ObjectCreated:*
            Filter:
              S3Key:
                Rules:
                  - Name: suffix
                    Value: ".json"
  LambdaInvokePermission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      FunctionName: !GetAtt DynamoS3Function.Arn
      Action: 'lambda:InvokeFunction'
      Principal: 's3.amazonaws.com'
      SourceAccount: !Sub ${AWS::AccountId}
      SourceArn: !GetAtt JSONBucket.Arn
  DynamoDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: Brands
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: "N"
        - AttributeName: name
          AttributeType: "S"
      KeySchema:
        - AttributeName: id
          KeyType: HASH
        - AttributeName: name
          KeyType: RANGE
      BillingMode: PAY_PER_REQUEST
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true
Outputs:
  APIEndpoint:
    Description: "HTTP API endpoint URL"
    Value: !Sub "https://${MyApi}.execute-api.${AWS::Region}.amazonaws.com"
I was able to fix the problem by removing the Globals section. I'm not sure why it doesn't work with it, but everything works now after removing it and adding the LambdaInvokePermission.
SAM CLI, version 1.27.2
cfn-lint 0.48.3
I am still not able to get this working. I have followed the complete thread. Even removing the Globals section did not help; cfn-lint continues to report the following error:
[cfn-lint] E3004: Circular Dependencies for resource **Bucket. Circular dependency with [**S3EventPermission]
Is there a clean alternative, as cfn-lint cannot be skipped in the deployment pipeline?
I am having the same issue and cannot create a new AWS::Serverless::Function with an S3 trigger. My build pipeline fails at the create-changeset step.
fwiw I'm using the exact same code to successfully create multiple other serverless functions with other types of triggers.
Clicking on the "Details" link in the pipeline yields nothing.
MediaBucketObjectCreatedFunction:
  Type: 'AWS::Serverless::Function'
  Properties:
    CodeUri: '../dist/lambda/'
    Events:
      ObjectCreatedEvent:
        Type: 'S3'
        Properties:
          Bucket: !Ref MediaS3Bucket
          Events: 's3:ObjectCreated:*'
    Handler: 'users.media_bucket'
    Role: !GetAtt AppLambdaRole.Arn
    Runtime: 'python3.8'
    Timeout: 45
For anyone who ends up here and can't get it working, I was able to get it working by adding the invoke permission as stated above AND adding the lambda execution role to the s3 bucket policy:
MediaBucketS3Policy:
  Type: 'AWS::S3::BucketPolicy'
  Properties:
    Bucket: !Ref MediaS3Bucket
    PolicyDocument:
      Statement:
        - Action:
            - 's3:*'
          Effect: 'Allow'
          Resource: !Sub 'arn:aws:s3:::${MediaS3Bucket}/*'
          Principal:
            AWS: !GetAtt AppLambdaRole.Arn
and
LambdaInvokePermission:
  Type: 'AWS::Lambda::Permission'
  Properties:
    FunctionName: !GetAtt MediaBucketObjectCreatedFunction.Arn
    Action: 'lambda:InvokeFunction'
    Principal: 's3.amazonaws.com'
    SourceAccount: !Ref 'AWS::AccountId'
    SourceArn: !GetAtt MediaS3Bucket.Arn
To AWS: the bug still exists that the CloudFormation stack does not show any errors when create-changeset fails, which can leave a user completely lost as to what the problem is.
I had the same thing happen to me in front of my co-workers. I was trying to show them how easy sam cli was and ran the sample s3 application. When I went to the console to show the trigger, it was missing. Needless to say they were not impressed. The solution was to create the invoke permission but I had to dredge the internet for the answer. I am running SAM CLI, version 1.33.0. Honestly this should be fixed in the sam template at init time or documented somewhere? This is 3 years old!
fyi I ran into the circular dependency issue @dani882 ran into above and my codepipeline became stuck for the past 2 weeks as my create changeset action never moved past "CREATE_IN_PROGRESS" until it finally failed weeks later.
not sure what the best way to get past this is but going to try creating a 2nd s3 bucket w/explicit name as documented https://aws.amazon.com/blogs/infrastructure-and-automation/handling-circular-dependency-errors-in-aws-cloudformation/
Though we are not changing the existing behaviour, there is a simple way to deal with the problem gracefully.
First, let me briefly explain why we have not modified the existing Lambda resource policy and are not planning to do so. As mentioned above, the problem with Console not showing the trigger comes from the fact that the resource policy which is created by SAM on S3 Event generation does not restrict Lambda access to a single bucket. If we change the policy now, it will break working code for the customers who already rely on broader permissions, as mentioned in the referenced explanation and here.
Second, I'd like to recommend a way to narrow down the permissions so they work with the Console, avoid the circular-dependency pitfall, and reduce boilerplate. It is based on the ideas many of you have already figured out, and it leverages the SAM Connector resource we recently introduced.
Instead of crafting an AWS::Lambda::Permission, use AWS::Serverless::Connector, which we introduced in September 2022. You can read more on Connectors here.
To guarantee that there is no circular dependency, hardcode your bucket name.
Here is an example, based on the one that was submitted when the issue was opened:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example
Resources:
  LogToWatch:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs16.x
      Handler: index.handler
      InlineCode: |
        exports.handler = async (event) => {
          console.log(event);
        };
      Timeout: 300
      Policies: AmazonS3ReadOnlyAccess
      Events:
        S3CreateObject:
          Type: S3
          Properties:
            Bucket:
              Ref: TargetBucket
            Events: s3:ObjectCreated:Put
  TargetBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: target-bucket-for-test-102546
  MyConnector:
    Type: AWS::Serverless::Connector
    Properties:
      Source:
        Type: AWS::S3::Bucket
        Arn: !Sub arn:${AWS::Partition}:s3:::target-bucket-for-test-102546
      Destination:
        Id: LogToWatch
      Permissions:
        - Write
Notice how we have to set the bucket name explicitly:
Properties:
  BucketName: target-bucket-for-test-102546
And then reference the Connector source by ARN:
Source:
  Type: AWS::S3::Bucket
  Arn: !Sub arn:${AWS::Partition}:s3:::target-bucket-for-test-102546
P.S. If you don't have to stick to AmazonS3ReadOnlyAccess for compatibility reasons, you can use another connector instead:
MyConnector2:
  Type: AWS::Serverless::Connector
  Properties:
    Source:
      Id: LogToWatch
    Destination:
      Type: AWS::S3::Bucket
      Arn: !Sub arn:${AWS::Partition}:s3:::target-bucket-for-test-102546
    Permissions:
      - Read
Notice that Source and Destination have exchanged places, and we now request Read permissions for Lambda's access to S3.
A connector results in a more granular generated policy. Compare this one, generated by the connector:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:GetObjectLegalHold",
        "s3:GetObjectRetention",
        "s3:GetObjectTorrent",
        "s3:GetObjectVersion",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectVersionForReplication",
        "s3:GetObjectVersionTorrent",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::target-bucket-for-test-102546",
        "arn:aws:s3:::target-bucket-for-test-102546/*"
      ],
      "Effect": "Allow"
    }
  ]
}
To the one from AmazonS3ReadOnlyAccess:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3-object-lambda:Get*",
        "s3-object-lambda:List*"
      ],
      "Resource": "*"
    }
  ]
}
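To make the granularity difference concrete, here is a small Python sketch of IAM-style wildcard action matching. The action lists are abbreviated from the two policies above, and real IAM evaluation also considers resources, conditions, and case-insensitive matching, so this is only an illustration:

```python
from fnmatch import fnmatchcase

def allows(policy_actions, requested):
    """IAM-style wildcard match sketch: does any listed action pattern
    cover the requested action? (Real IAM matching is case-insensitive;
    fnmatchcase is used here for a deterministic illustration.)"""
    return any(fnmatchcase(requested, pattern) for pattern in policy_actions)

connector_actions = ["s3:GetObject", "s3:ListBucket"]  # subset of the connector policy
readonly_actions = ["s3:Get*", "s3:List*"]             # from AmazonS3ReadOnlyAccess

# The managed policy covers every Get action on every resource,
# while the connector only allows the enumerated actions on the one bucket.
assert allows(readonly_actions, "s3:GetBucketPolicy")
assert not allows(connector_actions, "s3:GetBucketPolicy")
assert allows(connector_actions, "s3:GetObject")
```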
@ssenchenko thanks for the workaround, but it effectively defeats the purpose and spirit of AWS SAM as a clear mechanism for defining events for Lambda functions: users somehow need to know to whip up an AWS::Serverless::Connector and repeat that for every bucket! And it does not work for an already deployed function (as AWS does not detect drift of resource-based policy statements?).
There seems to be a clear way to make this work without breaking existing customers: update the Transform: AWS::Serverless-2016-10-31 version and generate the proper permissions.
Hi, I am facing an issue where the event is not being created and associated with the Lambda function, although it is specified in the SAM template: