petemounce closed this issue 8 years ago
I've just adjusted the IAM permissions to allow s3:*
on the bucket (as below) but I get the same log output.
{"Statement":[{"Resource":["arn:aws:s3:::domain.com","arn:aws:s3:::domain.com/","arn:aws:s3:::domain.com/*"],"Action":["s3:*"],"Effect":"Allow"},{"Resource":["arn:aws:s3:::*"],"Action":["s3:GetBucketLocation","s3:ListAllMyBuckets"],"Effect":"Allow"},{"Resource":["arn:aws:s3:::domain.com"],"Action":["s3:ListBucket"],"Effect":"Allow"},{"Resource":["arn:aws:s3:::domain.com/*"],"Action":["s3:DeleteObject","s3:GetObject","s3:PutObject"],"Sid":"AllowS3WebsiteThings","Effect":"Allow"}],"Version":"2012-10-17"}
Even though I'm not using it, should I grant CloudFront permissions?
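For what it's worth, a policy like the one above can be sanity-checked mechanically. Here is a rough stdlib-only Python sketch; the list of actions a push needs is my assumption (not taken from the s3_website docs), and domain.com stands in for the real bucket name:

```python
import fnmatch

# The inline policy quoted above, transcribed as a Python dict.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:*"],
         "Resource": ["arn:aws:s3:::domain.com",
                      "arn:aws:s3:::domain.com/",
                      "arn:aws:s3:::domain.com/*"]},
        {"Effect": "Allow",
         "Action": ["s3:GetBucketLocation", "s3:ListAllMyBuckets"],
         "Resource": ["arn:aws:s3:::*"]},
        {"Effect": "Allow", "Action": ["s3:ListBucket"],
         "Resource": ["arn:aws:s3:::domain.com"]},
        {"Sid": "AllowS3WebsiteThings", "Effect": "Allow",
         "Action": ["s3:DeleteObject", "s3:GetObject", "s3:PutObject"],
         "Resource": ["arn:aws:s3:::domain.com/*"]},
    ],
}

def allows(policy, action, resource):
    """Very simplified IAM check: '*' globs in Allow statements only.

    Ignores Deny, NotAction, and Conditions -- good enough for a quick
    sanity check, not a faithful policy evaluator.
    """
    return any(
        any(fnmatch.fnmatchcase(action, a) for a in stmt["Action"])
        and any(fnmatch.fnmatchcase(resource, r) for r in stmt["Resource"])
        for stmt in policy["Statement"]
        if stmt.get("Effect") == "Allow"
    )

# Actions a diff-based push plausibly needs (my guess, not from the docs).
checks = [
    ("s3:GetBucketLocation", "arn:aws:s3:::domain.com"),
    ("s3:ListBucket", "arn:aws:s3:::domain.com"),
    ("s3:GetObject", "arn:aws:s3:::domain.com/index.html"),
    ("s3:PutObject", "arn:aws:s3:::domain.com/index.html"),
    ("s3:DeleteObject", "arn:aws:s3:::domain.com/index.html"),
]
for action, resource in checks:
    print(action, "->", allows(POLICY, action, resource))  # all True here
```

All five checks pass against this policy, which suggests the bucket-level permissions themselves are not the problem (assuming the pushes really target this bucket).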
If you have not defined a CloudFront distribution id in your config file, you don't need to declare any CF permissions.
The output log implies that you have thousands of files in your S3 bucket. I wonder if that has something to do with the error. s3_website should support an unlimited number of files. It should also echo AWS permission errors if it encounters any. I'm out of good guesses here.
@petemounce I just added a debugging guide. It contains instructions on how to log the AWS operations.
Thanks @laurilehmijoki - I'll try that out and get back to you. There are indeed thousands of files in the bucket - small files of static html and images accumulated over around 9 years of daily blogging.
I've tried that, and now I get output like
> bundle exec ruby ./bin/s3_website cfg apply --no-autocreate-cloudfront-dist --config-dir <correct config dir path>
Applying the configurations in s3_website.yml on the AWS services ...
Bucket <correct bucket> now functions as a website
AWS API call failed:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>[some request id]</RequestId><HostId>[some host id]</HostId></Error> (GenericError)
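In case it helps anyone else who hits this: the Code/Message in that payload don't say which API call was denied, but the RequestId/HostId pair is what AWS support (or the bucket's server access logs) can use to trace the exact denied request. A quick stdlib sketch that pulls those fields out of the error element (XML prolog dropped, ids redacted as in the thread):

```python
import xml.etree.ElementTree as ET

# The error body printed by s3_website, ids redacted as above.
ERROR_XML = (
    "<Error><Code>AccessDenied</Code><Message>Access Denied</Message>"
    "<RequestId>[some request id]</RequestId>"
    "<HostId>[some host id]</HostId></Error>"
)

# Collect the child elements into a flat dict for easy inspection.
details = {child.tag: child.text for child in ET.fromstring(ERROR_XML)}
print(details["Code"], "-", details["RequestId"])
```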
I've tried a src/main/resources/log4j.properties
with contents:
log4j.rootLogger=DEBUG, A1
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
# Or you can explicitly enable WARN and ERROR messages for the AWS Java clients
log4j.logger.com.amazonaws=DEBUG
log4j.logger.com.amazonaws.request=DEBUG
log4j.logger.org.apache.http.wire=DEBUG
and nothing makes it output more.
I've done this both in a clone of this repo and within <ruby install's gems directory>/src/main/resources/log4j.properties
- neither edit makes a difference.
I haven't tried print
in the Scala - I wouldn't really know where to start. I had a look to see if I could find where the AWS clients are created, but so far haven't succeeded.
The IAM in effect (in addition to the out of the box reader policy):
"AuthorPolicy": {
"Type": "AWS::IAM::ManagedPolicy",
"Properties": {
"Description": "Website Authors",
"Roles": [
{
"Ref": "AuthorsRole"
}
],
"Groups": [
{"Ref":"AuthorsGroup"}
],
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets"
],
"Resource": [
{
"Fn::Join": [
"",
[
"arn:aws:s3:::*"
]
]
}
]
},
{
"Sid": "FindBuckets",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
{
"Fn::Join": [
"",
["arn:aws:s3:::","www.",{"Ref": "ApexDomain"}]
]
},
{
"Fn::Join": [
"",
["arn:aws:s3:::",{"Ref": "LogsBucketName"}]
]
}
]
},
{
"Sid": "AllowS3WebsiteAdmin",
"Effect": "Allow",
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:PutBucketWebsite",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
{
"Fn::Join": [
"",
["arn:aws:s3:::","www.",{"Ref": "ApexDomain"}]
]
},
{
"Fn::Join": [
"",
["arn:aws:s3:::","www.",{"Ref": "ApexDomain"},"/*"]
]
}
]
},
{
"Sid": "AllowS3WebsiteLogsRetrieval",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
{
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "LogsBucketName"
},
"/logs/static/",
"www.",
{
"Ref": "ApexDomain"
},
"/s3-access/*"
]
]
}
]
}
]
}
}
}
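One thing worth double-checking in a template like this: the Fn::Join expressions scope everything to www.<ApexDomain>, while the inline policy earlier in the thread was written against the bare domain.com bucket. If s3_website is pushing to a bucket whose name doesn't match the rendered ARNs, none of these statements apply and every call is denied. A small Python sketch that resolves the intrinsics locally, with made-up parameter values, to see the concrete ARNs:

```python
# Made-up parameter values standing in for the redacted ones.
PARAMS = {"ApexDomain": "domain.com", "LogsBucketName": "logs.domain.com"}

def resolve(node):
    """Resolve the Ref and Fn::Join intrinsics used in the policy above."""
    if isinstance(node, dict):
        if "Ref" in node:
            return PARAMS[node["Ref"]]
        if "Fn::Join" in node:
            separator, parts = node["Fn::Join"]
            return separator.join(resolve(part) for part in parts)
    return node  # plain string, return as-is

website_arn = resolve(
    {"Fn::Join": ["", ["arn:aws:s3:::", "www.", {"Ref": "ApexDomain"}]]}
)
objects_arn = resolve(
    {"Fn::Join": ["", ["arn:aws:s3:::", "www.", {"Ref": "ApexDomain"}, "/*"]]}
)
print(website_arn)  # arn:aws:s3:::www.domain.com
print(objects_arn)  # arn:aws:s3:::www.domain.com/*
```

Comparing those rendered ARNs against the s3_bucket value in s3_website.yml would confirm or rule out a name mismatch.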
Here s3_website resolves the objects in your S3 bucket. Here is the line that prints [debg] Querying more S3 files to the console.
Try adding print(reports) at https://github.com/laurilehmijoki/s3_website/blob/100e2195f9deefbaf4a8382f63818bfc0f6dcfea/src/main/scala/s3/website/Push.scala#L105 - do you see anything suspicious?
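For context on the repeated [debg] Querying more S3 files lines: S3's ListObjects API returns at most 1000 keys per response, so a bucket with thousands of files has to be listed in a loop of calls, each of which must individually pass the s3:ListBucket permission check. A stdlib-only sketch of that pagination pattern against a fake in-memory bucket (the 2500-key figure is invented for illustration):

```python
def list_all(fetch_page):
    """Drain a paginated listing by following continuation tokens."""
    keys, token = [], None
    while True:
        page, token = fetch_page(token)
        keys.extend(page)
        if token is None:  # no more pages
            return keys

# Stand-in for ~9 years of daily posts.
FAKE_KEYS = [f"post-{i}.html" for i in range(2500)]

def fake_page(token, page_size=1000):
    """Mimic ListObjects: at most page_size keys plus a continuation token."""
    start = token or 0
    page = FAKE_KEYS[start:start + page_size]
    nxt = start + page_size if start + page_size < len(FAKE_KEYS) else None
    return page, nxt

print(len(list_all(fake_page)))  # 2500, fetched in 3 round trips
```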
Any chance you could share the website data with me? I could try to debug it locally on my machine.
Closing as inactive. Please reopen if needed.
I've started to use s3_website - thanks for writing it!
I have a problem, though. The initial push went fine. However, now that I've made an edit to a file that I've pushed and try to push again, I get a failure. This happens both with and without the --dry-run
option.
Could the log output please be made verbose so that one can see the underlying API calls being made in situations like these?
I think my only workaround is to --force
push every time now - that makes my workflow much longer (and causes AWS to charge me more for the traffic).
I've attached the versions of the software that I'm using, the config file minus secrets, the log output, and the IAM permission set applied to the user.
software:
ruby: 2.3.0
s3_website: 2.12.3
config file: .aws/s3_website.yml (with identifying bits and secrets changed)
log output:
IAM permission set that the user has: