sst / guide

Repo for guide.sst.dev
https://guide.sst.dev
MIT License

Comments: Upload a File to S3 #49

Closed: jayair closed this issue 6 years ago

jayair commented 7 years ago

Link to chapter - http://serverless-stack.com/chapters/upload-a-file-to-s3.html

geirman commented 7 years ago

I see the following 400 error in my dev console (Network > XHR) under the name ?max-keys=0. I've searched my code for 'us-east-1' and found zero results; I'd already converted all those instances to 'us-east-2' to match the values I got back from AWS. Not too sure what is causing this, and I was initially concerned, but after checking the database... everything seems to have inserted into DynamoDB and uploaded to S3 correctly. ¯\_(ツ)_/¯

<Error>
    <Code>AuthorizationHeaderMalformed</Code>
    <Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-1'</Message>
    <Region>us-west-1</Region>
    <RequestId>39E99C1D42C6E600</RequestId>
    <HostId>tfuC/uhW4xhxPwMW+kqicWQxCdTznTrsYpM+lr40QGIyriIFysywMKlnnKqOGIKQ88SqN7SxWxE=</HostId>
</Error>
jayair commented 7 years ago

@geirman still having this issue? I noticed you commented on the Delete Note chapter.

geirman commented 7 years ago

Yes, it has something to do with the S3 upload. It works, but I get the error each time I attach something. No error when I just update the comment.

jayair commented 7 years ago

You get the error every time you create a new note with a file as an attachment? But the file is uploaded successfully? That's strange.

Where are you seeing this error?

<Error>
    <Code>AuthorizationHeaderMalformed</Code>
    <Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-1'</Message>
    <Region>us-west-1</Region>
    <RequestId>39E99C1D42C6E600</RequestId>
    <HostId>tfuC/uhW4xhxPwMW+kqicWQxCdTznTrsYpM+lr40QGIyriIFysywMKlnnKqOGIKQ88SqN7SxWxE=</HostId>
</Error>
geirman commented 7 years ago

Every time I create or update a note and attach something. It's successfully uploaded though, which I agree seems strange. So it's not blocking me, but it would be nice to understand why it's happening.

I found the error under Networking > XHR > click on the item with a 400 error ?max-keys=0 > Preview > then expand the Error node.


It's deployed now, so you can see for yourself... http://notes-app-client-geirman.s3-website-us-east-1.amazonaws.com/

jayair commented 7 years ago

I played around with your app. I think I know what's going on. The AWS JS SDK has its region set to us-east-1, but your S3 file uploads bucket is in us-west-1. Apparently, the SDK retries with the correct region, hence it ends up working. You can set the correct region before you do the upload like so:

  const s3 = new AWS.S3({
    // Use the region of the file uploads bucket, not the SDK-wide default
    region: 'us-west-1',
    params: {
      Bucket: config.s3.BUCKET,
    }
  });

The tutorial doesn't need to do this because the region set for the AWS JS SDK through the AWS.config.update({ region: config.cognito.REGION }); call is the same as the region of the S3 file uploads bucket.
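For reference, here is a minimal sketch of that global setup, assuming the tutorial's config.js and ES module imports:

import AWS from 'aws-sdk';
import config from './config';

// Sets the default region for every AWS SDK client in the app;
// a per-client region (like the S3 example above) overrides it.
AWS.config.update({ region: config.cognito.REGION });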

You can read more about it here - https://github.com/aws/aws-sdk-js/issues/986#issuecomment-217017283

geirman commented 7 years ago

Thanks @jayair, you've been a huge help. Setting the region to 'us-west-1' does resolve the problem, but I can't for the life of me figure out how that makes any sense. Everything I'm seeing indicates that the region should be 'us-east-2'. I tried 'us-east-2' for giggles, but it errored as well. Where should I have been looking to know that 'us-west-1' was the right value?


jayair commented 7 years ago

What about the bucket that you set up for file uploads? The one we do in this chapter - http://serverless-stack.com/chapters/create-an-s3-bucket-for-file-uploads.html

geirman commented 7 years ago

That's the one

jayair commented 7 years ago

Thanks.

I don't think the region in the URL for the AWS Console is the region of the bucket. The console does show the correct region, either in the list of buckets or on the bucket page. You can see it here in this screenshot.

(screenshot: select-created-s3-bucket)

And here is the US East (N. Virginia) - us-east-1 mapping http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

nerdguru commented 7 years ago

I'm getting a 403 on the PUT and rechecked my CORS settings on the bucket, which look OK. What else should I be looking at to troubleshoot here?

geirman commented 7 years ago

@jayair Good call! The region thing seems like an important detail to AWS, but it gets in the way from my perspective. I wish we could abstract it away.


I created a CodeStar project this morning and as I was checking it out, I went back to my AWS console and navigated back to it... and it was gone! I was confused, so I created another. Both seemed to work, so I kept scratching my head until I figured out that my demo 1 was in a different region. Not sure why I switched regions, but it's a confusing detail that I continually seem to stumble on. (Sorry to get off topic.)


jayair commented 7 years ago

@nerdguru can I see the full error? I think you can expand the 403 error in the console and it might give you some info on why it's failing. Also, let's see what the url endpoint is for the PUT request.

@geirman yeah in future we might look into building something that would hopefully abstract out these details and gotchas. If you come across some ideas, send them our way 😉

nerdguru commented 7 years ago

@jayair Here's what gets output in the Chrome console:

pete-notes-app.s3.amazonaws.com/us-east-1%3A636ea0f9-5d92-41f2-86eb-93aa67b66968-1492639359454-addams.txt:1 PUT https://pete-notes-app.s3.amazonaws.com/us-east-1%3A636ea0f9-5d92-41f2-86eb-93aa67b66968-1492639359454-addams.txt 403 (Forbidden)

That path looks right to me, but your eyes might reveal something.

abagasra98 commented 7 years ago

@nerdguru I had the same problem. AWS throws a 403 error because the permissions associated with the authorized users (of your identity pool) do not grant them access to read/write S3 data.

The solution is to go into the IAM console, open the Roles tab on the side, and click on the role associated with your identity pool. For reference, mine was called "_Cognito_notesidentitypoolAuthRole". Once you're on the Summary page, click Attach policy and choose the following: AmazonS3FullAccess

fwang commented 7 years ago

@abagasra98 is correct in that a lack of S3 upload permission can cause the 403 error. Granting the identity pool AmazonS3FullAccess solves the problem, but it also grants a user access to edit/remove files uploaded by other users. A subtler tweak to the solution is to grant users edit/remove access only to the files they uploaded themselves.

@nerdguru Let's first take a look at the IAM policies assigned to the identity pool. As @abagasra98 suggested, go to IAM console, click on Roles in the left menu, click on Cognito_notesidentitypoolAuth_Role, click on Show Policy near the bottom of the page.

nerdguru commented 7 years ago

@abagasra98 and @fwang, that was it, thanks so much. I clearly missed that step when setting up the Identity Pool. I changed that policy to the one shown on that step and now it works like a champ. The .txt file I selected in the app shows up with the expected prefixed name in my bucket.

Sorry it took me so long to find the quiet time to try it out 8)

alpiepho commented 7 years ago

@jayair adding the AmazonS3FullAccess policy allows me to upload files now. Two questions: 1) I didn't follow the comment from @fwang. Is there a way to tighten that access? (details would be appreciated) 2) Did I miss a step in the tutorial?

Thanks for all the help here.

fwang commented 7 years ago

@alpiepho the policy allowing the Identity Pool to access S3 resources was defined in the Create a Cognito Identity Pool chapter. When the Identity Pool was first created, we attached the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
  ...,
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/${cognito-identity.amazonaws.com:sub}*"
      ]
    }
  ]
}

This grants access to the YOUR_S3_UPLOADS_BUCKET_NAME bucket, and only to the files in it that are prefixed with the user's identity.
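To illustrate, ${cognito-identity.amazonaws.com:sub} expands to the user's federated identity ID at request time, so for the identity ID visible in the upload logs earlier in this thread the policy effectively matches:

arn:aws:s3:::YOUR_S3_UPLOADS_BUCKET_NAME/us-east-1:636ea0f9-5d92-41f2-86eb-93aa67b66968-*

This is why one user cannot edit or remove files prefixed with another user's identity.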

hutualive commented 7 years ago

Why does

const uploadedFilename = (this.file)
 ? (await s3Upload(this.file, this.props.userToken)).Location
 : null;

return a URL?

In s3Upload, the returned object just has:

return s3.upload({
    Key: filename,
    Body: file,
    ContentType: file.type,
    ACL: 'public-read',
  }).promise();

I don't see a key like "Location".

thanks.

jayair commented 7 years ago

@hutualive These are the AWS SDK docs for the upload method - http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property. It returns the Location property that we use. Our own s3Upload method simply returns a Promise that eventually gives us the object containing the Location property.
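Put differently, a minimal sketch of the call site, inside the async submit handler as in the question above (the names come from the chapter; the shape of the resolved object is from the SDK docs linked above):

// s3.upload(...).promise() resolves to an object shaped like
// { Location, ETag, Bucket, Key }, where Location is the file's URL.
const result = await s3Upload(this.file, this.props.userToken);
const uploadedFilename = result.Location;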

michaelcuneo commented 7 years ago

I have an odd problem... my DynamoDB table and S3 file upload appear to be actually updating: if I log in to the AWS console and look in the S3 bucket and the DynamoDB table, I see the proper data. But after a call, the Creating spinner just spins and spins and spins... eventually I get these errors in my console.

PUT https://hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/undefined-1502368754577-IMG_0454.jpg net::ERR_CONNECTION_ABORTED
hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/?max-keys=0:1 GET https://hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/?max-keys=0 403 (Forbidden)
xhr.js?28e2:81 PUT https://hrs-notes-uploads.s3-ap-southeast-2.amazonaws.com/undefined-1502368754577-IMG_0454.jpg 403 (Forbidden)

No idea what I've done wrong.

michaelcuneo commented 7 years ago

Now I've got a new error with seemingly no changes whatsoever. :o

POST https://5qyf9lxnte.execute-api.ap-southeast-2.amazonaws.com/prod/notes 403 ()
notes:1 Fetch API cannot load https://5qyf9lxnte.execute-api.ap-southeast-2.amazonaws.com/prod/notes. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://192.168.0.10:3000' is therefore not allowed access. The response had HTTP status code 403. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
michaelcuneo commented 7 years ago

Disregard all that, I did a stupid thing. Circularly tried to push /notes ... N.B. don't do that. :)

jayair commented 7 years ago

@michaelcuneo Glad you figured it out.

designpressure commented 7 years ago

@fwang I have the same 403 error. I've verified my IAM role policy and it is exactly as requested, but I still have the problem... what should I check? I have also verified CORS:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration>
   <CORSRule>
      <AllowedOrigin>*</AllowedOrigin>
      <AllowedMethod>GET</AllowedMethod>
      <MaxAgeSeconds>3000</MaxAgeSeconds>
      <AllowedHeader>Authorization</AllowedHeader>
   </CORSRule>
</CORSConfiguration>

and policy in IAM:

            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::notes-app-api-prod-ZzZZzzzZzzz/${cognito-identity.amazonaws.com:sub}*"
            ]
        },
jayair commented 7 years ago

@designpressure That CORS block that you posted is the default one. The one we use in the tutorial (https://serverless-stack.com/chapters/create-an-s3-bucket-for-file-uploads.html) looks like this:

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Not sure if you missed it but give that a try.

designpressure commented 7 years ago

Yeah, that was the problem, now it uploads fine, thanks.

quantuminformation commented 7 years ago

Note: if you get an error that says AccessDenied, your policy for the auth role is likely incorrect; for me it was the wrong bucket setting.

tommedema commented 6 years ago

Talking about security and validation: currently the size-based validation only runs on the frontend:

if (this.file && this.file.size > config.MAX_ATTACHMENT_SIZE) {
    alert("Please pick a file smaller than 5MB");
    return;
}

Obviously this is not something we can rely upon. Is it possible to enforce this in the policy document for S3?

jayair commented 6 years ago

@tommedema Yeah, we don't cover this in detail but I think we could. There is a way to limit this, but not through the policy that you set in the AWS Console. Instead, you have to generate a pre-signed POST policy with a content-length-range condition.

tommedema commented 6 years ago

@jayair I don't understand. The headers would still be controlled by the client, and therefore cannot be trusted. The enforcement necessarily has to come from the server-side. Since we don't have a server, the only way (from my perspective) would be to define it in a policy document. Am I missing something?

jayair commented 6 years ago

@tommedema It does require an API on the backend to sign the request. Here is one way the AWS docs show to do this - https://aws.amazon.com/articles/browser-uploads-to-s3-using-html-post-forms/.
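For reference, a hedged sketch of that server-side step using the SDK's createPresignedPost helper (a newer alternative to the hand-signed policy in that article; the bucket name and the 5MB cap are placeholders mirroring config.MAX_ATTACHMENT_SIZE):

const AWS = require('aws-sdk');

const s3 = new AWS.S3();

// Runs on the backend (e.g. in a Lambda), so the client cannot tamper with it.
// S3 itself rejects any POST whose body size falls outside the allowed range.
function getUploadForm(key, callback) {
  s3.createPresignedPost({
    Bucket: 'YOUR_S3_UPLOADS_BUCKET_NAME',
    Fields: { key },
    Conditions: [['content-length-range', 0, 5 * 1024 * 1024]],
    Expires: 300, // the signed form expires after 5 minutes
  }, callback);
}

The client then submits the file as a multipart POST form using the returned url and fields, instead of calling s3.upload directly.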

eldadmel commented 6 years ago

I have a problem when I try to create a note with a file. I always get an "AccessDenied: Access Denied" alert. I checked the IAM role and the CORS and I think they are correct. Is there anything else I can check?

jayair commented 6 years ago

@eldadmel Can you see the full error in the browser console?

SpencerGreene commented 6 years ago

I'm also seeing "AccessDenied: AccessDenied" alert. The browser console message is: Failed to load resource https://sg-serverless-stack-tutorial-01.s3.us-west-2.amazonaws.com/us-west-2%3A403eb53b-5dc4-47c2-9a61-340175902fd1-1513360210560-testfile.txt: the server responded with a status of 403 (Forbidden)

It's not in the client afaik - I tried your client from the github repo and the error is the same.

jayair commented 6 years ago

@SpencerGreene Check the IAM role for your Identity Pool in this chapter - https://serverless-stack.com/chapters/create-a-cognito-identity-pool.html

We set the permissions for what a client can access.

SpencerGreene commented 6 years ago

@jayair Thanks - fixed! It looked to my eye like it matched what was in that chapter, but I copy-pasted over what I had just to be sure, and sure enough it started working - so I must have fat-fingered something. (Curious - does AWS remember the history of my IAM role, so I can forensically analyze what I did wrong?)

SpencerGreene commented 6 years ago

OK here's a question -- I'm trying to implement the delete attachment "exercise for the reader." I see that the 'attachment' value stored in the database table is the full Location including the bucket name, whereas the deleteObject API wants the key in its own parameter. Is there a recommended way to strip the bucket name off of the 'attachment'? I can string-manipulate it, but that makes an assumption about the format of that string, which seems like bad practice. I was looking for an AWS API call that would take the Location and return the Key, but I don't see such an API.

Another way would be to store the Key instead of the Location; or to add another field to the database and store both. What would you recommend?

jayair commented 6 years ago

@SpencerGreene As an exercise, extracting the key from the URL is okay. But the proper practice would be to store both the key and the URL.
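For anyone attempting the exercise anyway, a minimal sketch of the string manipulation under discussion (keyFromLocation is a hypothetical helper; it assumes a virtual-hosted-style URL like the ones seen earlier in this thread):

// e.g. 'https://bucket-name.s3.amazonaws.com/us-east-1%3A...-note.txt'
function keyFromLocation(location) {
  const url = new URL(location);
  // The pathname is '/<key>', URL-encoded; drop the slash and decode.
  return decodeURIComponent(url.pathname.slice(1));
}

Note that this breaks for path-style URLs (https://s3.amazonaws.com/bucket/key), where the bucket name is part of the path - which is exactly why storing the key alongside the URL is the safer practice.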

picwell-mgeiser commented 6 years ago

I'm getting a CORS error similar to, but not the same as, the others here.

But... I am only getting the CORS error for the upload to S3. The other note functions (create, update, delete) work fine in the app; just the actual file upload to the S3 bucket is failing.

I redacted the S3 bucket name:

Chrome new:1 Failed to load https://s3.amazonaws.com/-----.-----.----/us-east-1%3A4d25e339-bdc1-4c09-9c83-5b99893414ec-1514561788846-ecsv_10_2017%20%282%29.csv: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. The response had HTTP status code 403.

FF Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://s3.amazonaws.com/-----.-----.-----/us-east-1%3A4d25e339-bdc1-4c09-9c83-5b99893414ec-1514568875798-Screen%20Shot%202017-12-24%20at%207.47.28%20PM.png. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).

I deleted and recopied the CORS configuration XML. This is what I get in the AWS console AFTER I save and come back to the CORS config

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>

The app does retry the upload 4 times, as I can see from the developer console. The problem is the 403 CORS issue, however.

I can upload and delete this file in the S3 bucket directly from the console with no issues (but this is a CORS issue according to the message).

Also, what should the ACL of the bucket contain? We created an IAM user (I called mine reactAppAdmin) but I do not see it in the ACL. I'd suspect it would be under the "Access for other AWS accounts" section. If I need to add this to the section, how do I determine the "Canonical User Id" for reactAppAdmin? (But this is a CORS issue, right?)

Thoughts?

picwell-mgeiser commented 6 years ago

Is this error because we're running the app from localhost and submitting to AWS without the React app setting an 'Access-Control-Allow-Origin' header?

How would we set that header in the app? Could we bump creating the deployed app to earlier in the tutorial and just run the deploy to the S3 app bucket more often? (Sounds like a good idea to give us React noobs more experience...)

I saw some results on Google where similar errors came from not setting the region correctly. I don't see where the region is set with the s3Upload helper used in this example.

jayair commented 6 years ago

@picwell-mgeiser In this chapter - https://serverless-stack.com/chapters/create-a-cognito-identity-pool.html we give permissions to our users to upload to S3. It is in the section where we add the policy for our Identity Pool. Just make sure you have completed this step properly.

picwell-mgeiser commented 6 years ago

I checked and the permissions look correct.

If I go into IAM > Roles > Cognito_NotesAppPoolAuth_role, expand the policy, and click on S3, then under Write the PutObject resource is:

BucketName | string like | pickme-sandbox-reactapp-files
ObjectPath | string like | ${cognito-identity.amazonaws.com:sub}*

Also, would the error specifically say CORS 'Access-Control-Allow-Origin' if it were a role config error? I know error messages aren't always 100% spot-on accurate, but I'd guess the CORS check happens before an ACL check, and probably in a different component, so it seems unlikely that an ACL error would generate a CORS 'Access-Control-Allow-Origin' error... just sayin' :)

How can I better trace and diagnose where exactly this failed?

jayair commented 6 years ago

@picwell-mgeiser AWS doesn't make these easy to debug. The main thing I would check for is typos: make sure the bucket name is correct both in the roles and where you upload, and the region as well.

jlissner commented 6 years ago

@picwell-mgeiser I had the same issue. I had a typo in the CORS configuration on my bucket; specifically, my last rule was <AllowedHeader>Authorization</AllowedHeader> and it needed to be <AllowedHeader>*</AllowedHeader>.

danielkaczmarczyk commented 6 years ago

I have the very same issue as @picwell-mgeiser does. I have tried all the methods in this (and other) threads. I used an Allow-Control-Origin-* Chrome extension to bypass the requirement by enabling cross-origin response sharing; it helps with the preflight authorization but doesn't fix the fact that I get the 301 result. The current error is: Response for preflight has invalid HTTP status code 301.

jayair commented 6 years ago

@danielkaczmarczyk And does this happen just for file uploads? Or for other API calls as well?

danielkaczmarczyk commented 6 years ago

@jayair Only on file uploads. I try to send a file, first I get a 200 GET from s3 with all cors headers set correctly, and after that, the OPTIONS call gets 301'd, the very same way as @picwell-mgeiser has described.

michaelcuneo commented 6 years ago

I have a feeling that something has changed on AWS's end... my app worked up until about a month ago and now all of a sudden it doesn't, with zero changes on my end at all. The push to the database works fine; the push to S3 does not. Just like what everyone else is experiencing. I've been trying to track this down for ages.
