I am following the example outlined here but have modified it to work for a .NET Core solution rather than a Node.js solution. I have added --debug to the cloudformation package parameters and am getting Access Denied on the S3 resources.
Information from the log file:
2018-08-17 20:10:04,490 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event request-created.s3.PutObject: calling handler <function signal_transferring at 0x0000000002A34518>
2018-08-17 20:10:04,490 - ThreadPoolExecutor-0_0 - botocore.endpoint - DEBUG - Sending http request: <PreparedRequest [PUT]>
2018-08-17 20:10:04,490 - ThreadPoolExecutor-0_0 - botocore.vendored.requests.packages.urllib3.connectionpool - INFO - Starting new HTTPS connection (2): s3.us-east-2.amazonaws.com
2018-08-17 20:10:04,505 - ThreadPoolExecutor-0_0 - botocore.awsrequest - DEBUG - Waiting for 100 Continue response.
2018-08-17 20:10:04,535 - ThreadPoolExecutor-0_0 - botocore.awsrequest - DEBUG - Received a non 100 Continue response from the server, NOT sending request body.
2018-08-17 20:10:04,535 - ThreadPoolExecutor-0_0 - botocore.vendored.requests.packages.urllib3.connectionpool - DEBUG - "PUT /cliq-dev-lambastore/Cliq.AQE.TeamServices/fcf7bab3b970b4e4a98717b68e7e6379 HTTP/1.1" 403 None
2018-08-17 20:10:04,536 - ThreadPoolExecutor-0_0 - botocore.parsers - DEBUG - Response headers: {'x-amz-id-2': 'y6UqWyz3hDgDMkeUNyl+slDXhziw6YK9PztsPGSMqycticL1Qu+MyokcuaLAoO0vYHtgMFKZRt4=', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'connection': 'close', 'x-amz-request-id': 'AE69316C0BC05854', 'date': 'Fri, 17 Aug 2018 20:10:04 GMT', 'content-type': 'application/xml'}
2018-08-17 20:10:04,536 - ThreadPoolExecutor-0_0 - botocore.parsers - DEBUG - Response body: <?xml version="1.0" encoding="UTF-8"?>
AccessDenied
2018-08-17 20:10:04,536 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event needs-retry.s3.PutObject: calling handler <botocore.retryhandler.RetryHandler object at 0x000000000388DB00>
2018-08-17 20:10:04,536 - ThreadPoolExecutor-0_0 - botocore.retryhandler - DEBUG - No retry needed.
2018-08-17 20:10:04,536 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event needs-retry.s3.PutObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x000000000388DB38>>
2018-08-17 20:10:04,536 - ThreadPoolExecutor-0_0 - botocore.hooks - DEBUG - Event after-call.s3.PutObject: calling handler <function enhance_error_msg at 0x0000000002F9E898>
2018-08-17 20:10:04,538 - ThreadPoolExecutor-0_0 - s3transfer.tasks - DEBUG - Exception raised.
Traceback (most recent call last):
  File "s3transfer\tasks.pyc", line 126, in __call__
  File "s3transfer\tasks.pyc", line 150, in _execute_main
  File "s3transfer\upload.pyc", line 692, in _main
  File "botocore\client.pyc", line 324, in _api_call
  File "botocore\client.pyc", line 622, in _make_api_call
ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
2018-08-17 20:10:04,538 - ThreadPoolExecutor-0_0 - s3transfer.utils - DEBUG - Releasing acquire 0/None
2018-08-17 20:10:04,549 - MainThread - awscli.customizations.cloudformation.artifact_exporter - DEBUG - Unable to export
Traceback (most recent call last):
  File "awscli\customizations\cloudformation\artifact_exporter.pyc", line 253, in export
  File "awscli\customizations\cloudformation\artifact_exporter.pyc", line 274, in do_export
  File "awscli\customizations\cloudformation\artifact_exporter.pyc", line 142, in upload_local_artifacts
  File "awscli\customizations\cloudformation\artifact_exporter.pyc", line 156, in zip_and_upload
  File "awscli\customizations\s3uploader.pyc", line 130, in upload_with_dedup
  File "awscli\customizations\s3uploader.pyc", line 109, in upload
ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
2018-08-17 20:10:04,549 - MainThread - awscli.clidriver - DEBUG - Exception caught in main()
Traceback (most recent call last):
  File "awscli\clidriver.pyc", line 208, in main
  File "awscli\clidriver.pyc", line 345, in __call__
  File "awscli\customizations\commands.pyc", line 187, in __call__
  File "awscli\customizations\cloudformation\package.pyc", line 138, in _run_main
  File "awscli\customizations\cloudformation\package.pyc", line 154, in _export
  File "awscli\customizations\cloudformation\artifact_exporter.pyc", line 450, in export
  File "awscli\customizations\cloudformation\artifact_exporter.pyc", line 261, in export
ExportFailedError: Unable to upload artifact None referenced by CodeUri parameter of AspNetCoreFunction resource. An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
2018-08-17 20:10:04,549 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255
Checking the rights on the IAM role that is executing this, I see the following.
{ "Action": [ "s3:GetObject", "s3:GetObjectVersion", "s3:GetBucketVersioning" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "s3:PutObject" ], "Resource": [ "arn:aws:s3:::codepipeline*", "arn:aws:s3:::elasticbeanstalk*" ], "Effect": "Allow" }
The AWS command I am running is as follows:
aws cloudformation package --template-file ./src/myprojectname/deploymentTemplate.yaml --s3-bucket project-dev-lambastore --s3-prefix myprojectname --output-template-file outputSamTemplate.yaml --debug
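Before assuming the pipeline configuration is the problem, I believe I can confirm which identity the package command is actually running as with the standard STS call:

aws sts get-caller-identity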
AWS CLI info:
aws-cli/1.15.80 Python/3.7.0 Windows/10 botocore/1.10.79
Do I need to be pulling the S3 prefix and S3 bucket from the pipeline information? If so, how do I do that? It looks like the IAM role has the correct S3 bucket rights as well.
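If the answer is that the bucket should come from the pipeline, I am guessing I would expose it to the build as an environment variable on the CodeBuild project and reference it in the command, along these lines (S3_BUCKET and S3_PREFIX are placeholder variable names I would define myself, not values the pipeline provides automatically):

aws cloudformation package --template-file ./src/myprojectname/deploymentTemplate.yaml --s3-bucket "$S3_BUCKET" --s3-prefix "$S3_PREFIX" --output-template-file outputSamTemplate.yaml --debug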