rnicholus closed this issue 8 years ago.
@rbliss I remember an issue with all x-amz headers, not just the x-amz-acl. To your knowledge, the issue is on Amazon's side, is that correct? Do they have any idea when this will be fixed?
@rnicholus It appears to be working for other x-amz headers. I'll test more and see how it goes.
I've been testing the x-amz-meta-#### headers, and they're working great. Other headers appear to come through fine too.
After further discussion with the folks at Amazon, it appears the specific issue I was having, where the headers don't appear to come through, is really due to setting up an 'Origin Access Identity' (OAI) on CloudFront. You have to be very careful when you set up an OAI to understand what's going on. An OAI will cause CloudFront to behave almost as if a different user (the OAI) is creating the S3 object, rather than the user whose access key you specify in FineUploader.
An OAI is optional, but if you do use one, you should use the ACL bucket-owner-full-control instead of private to get an S3 object that resembles what you would normally get by hitting S3 directly.
Other than that, I'm happy with the results I'm seeing from Cloudfront and it appears the earlier issues are no longer a problem.
My understanding was that an OAI is required when making signed requests, otherwise the additional headers attached to the request by CF will be rejected by S3 as they are not accounted for in the signature.
In the case of "bucket-owner-full-control", I'm guessing the "owner" is CF when using an OAI. Is this your understanding? If this is true, then based on the Canned ACL descriptions, public-read will result in the same problem for the bucket owner it seems.
Just a point of clarification: by signed requests, are you referring to the signed Authorization header required to authenticate a request in the S3 REST API, or signed urls used to restrict access to objects in S3?
The former. This is how Fine Uploader signs all upload requests.
Just wanted to double check. You do not need an OAI to send signed requests. In fact, if you turn off the OAI (or never set one up) on your Cloudfront distro, everything should just work like you were interacting with S3 directly.
That wasn't my experience before. In fact, the use of OAIs in this context was discussed on the AWS forums; one example can be found at https://forums.aws.amazon.com/thread.jspa?messageID=345913. I'll have to try again without an OAI at some point.
@rbliss Have you had a chance to measure what kind of performance or stability gains you are seeing with uploads to Cloudfront vs. uploads to S3?
@jasonshah I haven't had a chance. Anecdotally, it does seem faster.
@rnicholus Definitely give it a shot without an OAI. Life will be glorious.
A warning for anyone attempting to use CloudFront: it is a bit fiddly to configure. If you're having issues, the problem is likely in how you've set up CloudFront. I can write up configuration details if anyone is testing this.
I've been following this thread for a while with great interest. It sounds as if this is now working, but it's not totally clear what the procedure is to set CF up to use with FineUploader. Will this be documented?
Once this is tested and verified on our end, we'll update the S3 Uploads feature page with CF-specific details.
Here are my notes on setting up a proper CF distribution:
Origin Settings
- Restrict Bucket Access: No (see Notes on OAI below).
Default Cache Behavior Settings
- Allowed HTTP Methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE, else you can't create an S3 object.
- Forward Query Strings: Yes, else chunked uploads will fail.
- Restrict Viewer Access (Use Signed URLs): No, else you'll have to do a lot of tweaking to send a signed CloudFront URL in addition to all the S3 URL parameters.
Notes on OAI: If you do set up an OAI, make sure to set FineUploader's objectProperties.acl to bucket-owner-full-control or an appropriate equivalent. A CloudFront distribution with an OAI set up on an origin pointing to an S3 bucket will cause CloudFront to act as the OAI user when interacting with that bucket, regardless of the access key specified in FineUploader. Hence any object with an ACL of private will belong to CloudFront's OAI user, not the access key's user.
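As a sketch of that note (the bucket name and distribution domain below are placeholders; the point is the objectProperties.acl setting):

```javascript
// Sketch of a Fine Uploader S3 configuration for a CloudFront distribution
// that has an OAI attached. Bucket and endpoint values are placeholders.
var uploaderConfig = {
  request: {
    endpoint: 'https://dxxxxxxxxxxxx.cloudfront.net' // your CF domain
  },
  objectProperties: {
    bucket: 'my-upload-bucket',
    // With an OAI, objects are effectively created by the OAI user, so
    // grant the bucket owner full control instead of the default 'private'.
    acl: 'bucket-owner-full-control'
  }
};
// In the browser: new qq.s3.FineUploader(uploaderConfig);
```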
My bad, typo. @rbliss I confirm that with your information, the upload goes smoothly through CloudFront to S3. Thank you very much.
Sounds like this is finally working with Fine Uploader S3. I'm continually busy with support and licensing requests, as well as the S3 V4 support feature, but it is still on my radar to verify and update the docs.
I don't know if this is the right place to ask, but maybe your answers can help other people stuck like me.
I have set up a basic uploader page with S3 + CORS, and it works as expected.
When I switch to the CDN upload, the response is:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>MethodNotAllowed</Code>
<Message>The specified method is not allowed against this resource.</Message>
<Method>POST</Method>
<ResourceType>OBJECT</ResourceType>
<RequestId>2B0507CBD403F9C3</RequestId>
<HostId>nEV/gHkAE3wTCKP+ZJzV/VQXNEvFP+/w2UZtyIwTrkkV+/i9aFK/Ap8qTfGfDZ+PAl2kAYD7vE4=</HostId>
</Error>
The response headers are:
Access-Control-Allow-Method POST, PUT, DELETE
Access-Control-Allow-Origin *
Access-Control-Max-Age 3000
Allow GET, DELETE, HEAD, PUT
Connection keep-alive
Content-Type application/xml
Server AmazonS3
Transfer-Encoding chunked
Vary Origin, Access-Control-Request-Headers, Access-Control-Request-Method
Via 1.1 5d53b9570a535c2d94ce93c20abbd471.cloudfront.net (CloudFront)
X-Amz-Cf-Id dw-N_pLmOmEqlbri-B2l7l2a6TGWm2_tAhr5y9_InMU8ZuHYunZSEw==
X-Cache Error from cloudfront
My default cache behavior configuration has no OAI set.
What I have changed in the FineUploader config is:
request: {
    endpoint: 'dqu6rri____.cloudfront.net',
},
objectProperties: {
    bucket: 'fineup____-test'
},
Bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1447166234666",
    "Statement": [
        {
            "Sid": "Stmt1447166228927",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::fineup____-test/*"
        }
    ]
}
Cors:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>ETag</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
The bucket is located in Ireland. I'm quite clueless, so any help on how to track down the problem would be appreciated.
You'll need to fix your comment above. Take another look.
Either way, this is a question for the AWS forums, as it appears to be some type of configuration issue on your end.
Dear @rnicholus, if you think this is the wrong place, I can remove it so as not to clutter the issue's conversation.
@andreij Post it in the AWS CloudFront forums and I'll respond.
@rbliss Thanks! Here's the follow-up: https://forums.aws.amazon.com/thread.jspa?threadID=219620
@rnicholus @rbliss I'm getting an error when I try to upload large files to S3 through Cloudfront:
<Error>
<Code>AccessDenied</Code>
<Message>There were headers present in the request which were not signed</Message>
<HeadersNotSigned>x-amz-cf-id</HeadersNotSigned>
<RequestId>123</RequestId>
<HostId>abc=</HostId>
</Error>
var endpoint = 'mydomain.com';
var uploader = new qq.s3.FineUploader({
    // debug: true,
    element: el,
    request: {
        endpoint: 'https://' + endpoint,
        accessKey: s3AccessKey,
        params: metadata
    },
    chunking: {
        enabled: true
    },
    objectProperties: {
        bucket: bucket,
        host: endpoint,
        region: 'ap-southeast-2',
        serverSideEncryption: true
    },
    signature: {
        endpoint: domain + '/file-uploads/sign-request',
        version: 4,
        customHeaders: customHeaders
    },
Resources:
  Distribution:
    Type: "AWS::CloudFront::Distribution"
    Properties:
      DistributionConfig:
        Aliases:
          - !Sub files.${HostedZone}
        DefaultCacheBehavior:
          AllowedMethods:
            - "DELETE"
            - "GET"
            - "HEAD"
            - "OPTIONS"
            - "PATCH"
            - "POST"
            - "PUT"
          CachedMethods:
            - "GET"
            - "HEAD"
            - "OPTIONS"
          Compress: true
          ForwardedValues:
            QueryString: true
          TargetOriginId: "Upload Bucket"
          ViewerProtocolPolicy: "redirect-to-https"
        Enabled: true
        HttpVersion: "http2"
        Origins:
          - DomainName: !Sub ${BucketName}.s3-ap-southeast-2.amazonaws.com
            Id: "Upload Bucket"
            CustomOriginConfig:
              HTTPPort: 80
              HTTPSPort: 443
              OriginProtocolPolicy: https-only
        ViewerCertificate:
          AcmCertificateArn: !Ref Certificate
          SslSupportMethod: sni-only
  Bucket:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: !Ref BucketName
      CorsConfiguration:
        CorsRules:
          - AllowedOrigins:
              - "*"
            AllowedHeaders:
              - "*"
            AllowedMethods:
              - "GET"
              - "POST"
              - "PUT"
              - "DELETE"
            ExposedHeaders:
              - "Date"
              - "ETag"
  S3BucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    DependsOn: Bucket
    Properties:
      Bucket: !Ref BucketName
      PolicyDocument:
        Statement:
          - Sid: DenyIncorrectEncryptionHeader
            Action:
              - "s3:PutObject"
            Effect: "Deny"
            Resource: !Sub arn:aws:s3:::${BucketName}/*
            Principal: "*"
            Condition:
              StringNotEquals:
                s3:x-amz-server-side-encryption:
                  - "AES256"
          - Sid: DenyUnEncryptedObjectUploads
            Action:
              - "s3:PutObject"
            Effect: "Deny"
            Resource: !Sub arn:aws:s3:::${BucketName}/*
            Principal: "*"
            Condition:
              StringEquals:
                s3:x-amz-server-side-encryption:
                  - !Ref AWS::NoValue
Okay... I changed
- DomainName: !Sub ${BucketName}.s3-ap-southeast-2.amazonaws.com
to
- DomainName: !Sub ${BucketName}.s3.amazonaws.com
(removed the region) and... I've moved on to the next error: "The request signature we calculated does not match the signature you provided. Check your key and signing method."
The error says that the CanonicalRequest included host:my-bucket.s3.amazonaws.com.
...oh, objectProperties.host should be my-bucket.s3.amazonaws.com, not the endpoint. It's all working now :champagne: :tada:
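To summarize the fix as a config sketch (the domain and bucket names below are placeholders; the key point is that objectProperties.host names the bucket's own S3 hostname, not the CloudFront endpoint):

```javascript
// request.endpoint points at the CloudFront distribution, while
// objectProperties.host names the bucket's S3 hostname so the V4
// CanonicalRequest is signed against the host S3 actually sees.
var endpoint = 'mydomain.com'; // CloudFront alias (placeholder)
var bucket = 'my-bucket';      // placeholder
var config = {
  request: {
    endpoint: 'https://' + endpoint
  },
  objectProperties: {
    bucket: bucket,
    host: bucket + '.s3.amazonaws.com' // NOT the CloudFront endpoint
  }
};
```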
The current plan is to support simple (non-chunked) uploads to a CloudFront distribution. Chunked uploads are currently not possible when targeting a CloudFront distribution since CloudFront rips off the Authorization header containing the signature before forwarding the request on to S3. The Authorization header is a required field in the request when using any of the S3 multipart upload REST calls, which are needed to support Fine Uploader S3's chunking and auto-resume features. I have opened a request in the CloudFront forums asking for this behavior to be modified so multipart uploads requests can target a CloudFront distribution.
The scope of the planned support is still mostly undetermined, and it is not yet clear whether it will be part of 4.0. Making it part of 4.0 is possible, but looks less likely as I run into issues with CloudFront's handling of upload-related requests. I'm currently struggling to get path patterns for upload requests to work. I've opened another thread in the forum detailing my issue at https://forums.aws.amazon.com/thread.jspa?threadID=137627&tstart=0.
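For reference, these are the S3 multipart REST requests in question; each must carry the signed Authorization header, which is why CloudFront stripping that header breaks chunking. A sketch of the request URLs (bucket, key, and upload ID values are placeholders):

```javascript
// Sketch of the three S3 multipart upload REST requests that chunking
// and auto-resume rely on. Each requires a signed Authorization header.
function multipartUrls(bucket, key, uploadId, partNumber) {
  var base = 'https://' + bucket + '.s3.amazonaws.com/' + encodeURIComponent(key);
  return {
    initiate: base + '?uploads',                                              // POST
    uploadPart: base + '?partNumber=' + partNumber + '&uploadId=' + uploadId, // PUT
    complete: base + '?uploadId=' + uploadId                                  // POST
  };
}
```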