bradleyg / django-s3direct

Directly upload files to S3 compatible services with Django.
MIT License

Access to XMLHttpRequest has been blocked by CORS policy. #204

Open sskaditya opened 4 years ago

sskaditya commented 4 years ago

I have raised a new issue because the request origin is not reaching S3.

https://github.com/bradleyg/django-s3direct/issues/168#issuecomment-619361901

The same issue persists even after applying all of these S3 CORS settings:


<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>http://localhost:8000</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>ETag</ExposeHeader>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
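For reference, the same rule can be expressed as the dict that boto3's `put_bucket_cors` call accepts — a sketch only, mirroring the XML above (the bucket name is the one from the settings below):

```python
# The XML CORS rule above, as a boto3-style CORSConfiguration dict.
CORS_CONFIGURATION = {
    "CORSRules": [
        {
            "AllowedOrigins": ["http://localhost:8000"],
            "AllowedMethods": ["GET", "HEAD", "PUT", "POST", "DELETE"],
            "AllowedHeaders": ["*"],
            "ExposeHeaders": ["ETag"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

# Applying it requires valid credentials; shown commented out for illustration:
# import boto3
# boto3.client("s3").put_bucket_cors(
#     Bucket="new", CORSConfiguration=CORS_CONFIGURATION
# )
```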

My settings.py

AWS_ACCESS_KEY_ID = ' '
AWS_SECRET_ACCESS_KEY = ' '
AWS_STORAGE_BUCKET_NAME = 'new'
AWS_S3_REGION_NAME = 'ap-south-1'
AWS_S3_ENDPOINT_URL = 'https://s3.amazonaws.com'
S3DIRECT_REGION = 'ap-south-1'
S3_DIRECT_REGION = 'ap-south-1'
S3DIRECT_DESTINATIONS = {
    'primary_destination': {
        'key': 'uploads/',
        'allowed': ['image/jpg', 'image/jpeg', 'image/png', 'video/mp4'],
    },
    'region': 'ap-south-1'
}
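Two things look off in the settings as posted: `S3_DIRECT_REGION` (with the extra underscore) is not a setting the library reads, and the top-level `'region'` key inside `S3DIRECT_DESTINATIONS` sits at the wrong level — at that level every key is treated as a destination name. A cleaned-up sketch, assuming the bucket really is in `ap-south-1` (credential placeholders kept as in the original):

```python
# settings.py (sketch; region-specific endpoint, duplicates removed)
AWS_ACCESS_KEY_ID = ' '
AWS_SECRET_ACCESS_KEY = ' '
AWS_STORAGE_BUCKET_NAME = 'new'
AWS_S3_REGION_NAME = 'ap-south-1'
AWS_S3_ENDPOINT_URL = 'https://s3.ap-south-1.amazonaws.com'
S3DIRECT_REGION = 'ap-south-1'

S3DIRECT_DESTINATIONS = {
    'primary_destination': {
        'key': 'uploads/',
        # 'image/jpg' is not a registered MIME type; browsers send 'image/jpeg'
        'allowed': ['image/jpeg', 'image/png', 'video/mp4'],
    },
}
```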
jsbls commented 4 years ago

Have you tried checking the IAM access policies for your bucket? I don't think it's a CORS issue, but rather an access one.

Also, try adding the region to the endpoint: AWS_S3_ENDPOINT_URL="https://s3-ap-south-1.amazonaws.com"

sskaditya commented 4 years ago

Yes, I have, but I still get the same issue.

Shtaket commented 4 years ago

I have the same problem too. Does anyone have any suggestions?

mmcc5678 commented 4 years ago

I am getting intermittent CORS failures too. I have reviewed the differences between the requests that pass and those that fail and cannot see any difference.

UPDATE

After significant further investigation, I have found that this is related to the method used to provide AWS credentials to s3direct. My local dev environment uses an IAM user and works with files up to 4 GB with no problem. However, my Elastic Beanstalk (EC2) staging environment, which uses the EC2 instance role, consistently produces CORS/blocked/403 errors while uploading files over about 400 MB. Uploads always seem to start well, but usually somewhere between 15 minutes and an hour in, errors occur non-stop from that point onwards.

To test my theory, I SSH'd into the EC2 instance and provided the same IAM user credentials I was using in dev, and the problem disappeared. I have checked the S3 settings, Django settings, and IAM roles repeatedly to ensure I followed the directions and have no inconsistencies between dev and staging, and I am now convinced this is either an s3direct or a boto3/AWS issue.
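One hedged explanation consistent with the timing described above: requests signed with the instance role's temporary STS credentials stop validating once that session expires, so an upload that outlives the remaining session lifetime starts failing with 403s that the browser surfaces as CORS errors. A back-of-envelope check (names and numbers are mine, for illustration):

```python
from datetime import datetime, timedelta, timezone

def upload_outlives_credentials(now, credential_expiry, file_bytes, bytes_per_second):
    """True if the upload would still be running when the temporary
    credentials that signed it expire."""
    upload_duration = timedelta(seconds=file_bytes / bytes_per_second)
    return now + upload_duration > credential_expiry

# A 4 GB file at ~1 MB/s takes over an hour, easily outliving a
# session with, say, 30 minutes of validity left:
now = datetime(2021, 1, 1, tzinfo=timezone.utc)
expiry = now + timedelta(minutes=30)
print(upload_outlives_credentials(now, expiry, 4 * 10**9, 10**6))  # -> True
```

Long-lived IAM user keys never hit this window, which would match the fix reported below in this thread.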

I would LOVE it if someone had the answer! Or could tell me what I'm likely doing wrong.

Shall I move this to a new issue?

mmcc5678 commented 4 years ago

As a fix I have used an IAM user (via env vars) instead of the instance role, which has resolved the issue.

I am happy to make a PR for the README to suggest that an EC2 instance role is not the best option for files of 400 MB+. However, I don't have a specific explanation, as I don't fully understand where the problem lies in the process, so I won't be able to back up the suggestion with much detail.

Please let me know if you would like me to do this.
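Reading long-lived IAM user credentials from environment variables, as described above, can be sketched in `settings.py` like this (the variable names are the conventional AWS ones, not mandated by s3direct):

```python
import os

# Fall back to empty strings so the settings module still imports
# in environments where the variables are unset.
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID', '')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY', '')
```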

jacopsd commented 4 years ago

Try - for test purposes - a wildcard origin in the CORSConfiguration:

<AllowedOrigin>*</AllowedOrigin>
Also: the endpoint might be wrong. Do not use 's3', use 'glacier', e.g.: AWS_S3_ENDPOINT_URL = 'glacier.eu-west-1.amazonaws.com'

Check this list of endpoints: https://docs.aws.amazon.com/general/latest/gr/glacier-service.html