jschneier / django-storages

https://django-storages.readthedocs.io/
BSD 3-Clause "New" or "Revised" License

[feature request] - Add support for Cloudflare R2 #1062

Closed StevenMapes closed 6 months ago

StevenMapes commented 3 years ago

With the announcement of Cloudflare R2 it would be great if we could add in a backend to support that

akshaybabloo commented 3 years ago

Aren't they using s3 APIs? 🤔

StevenMapes commented 3 years ago

Aren't they using s3 APIs? 🤔

That is true. I'm still waiting for a response to my access request so I can test it myself. I was hoping someone here may have already gotten access and could confirm whether it works out of the box or whether any tweaks are required.

eliezerp3 commented 2 years ago

@StevenMapes Did you end up testing it?

shrawanx commented 2 years ago

Hi @StevenMapes, I tested it and it works using S3Boto3Storage for private media files. Since buckets are not public as of now, I didn't test the static files / public media part.

I had written a blog regarding it at https://djangotherightway.com/using-cloudflare-r2-with-django-for-storage
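
For reference, a minimal settings sketch for this setup (private media on R2 through S3Boto3Storage) might look like the following. The endpoint, keys, and bucket name are placeholders, and the ACL/signature settings are assumptions matching R2's S3-compatible defaults:

    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    AWS_ACCESS_KEY_ID = "<r2-access-key-id>"
    AWS_SECRET_ACCESS_KEY = "<r2-secret-access-key>"
    AWS_STORAGE_BUCKET_NAME = "<bucket>"
    AWS_S3_ENDPOINT_URL = "https://<account-id>.r2.cloudflarestorage.com"
    AWS_DEFAULT_ACL = "private"        # media stays private
    AWS_S3_SIGNATURE_VERSION = "s3v4"  # R2 speaks SigV4
    AWS_QUERYSTRING_AUTH = True        # serve media via signed URLs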

timkofu commented 2 years ago

We're successfully serving Django static files from an R2 bucket on a custom domain by attaching a CF worker to it.

djch commented 1 year ago

Hi @StevenMapes, I tested it and it works using S3Boto3Storage for private media files. Since buckets are not public as of now, I didn't test the static files / public media part.

I had written a blog regarding it at https://djangotherightway.com/using-cloudflare-r2-with-django-for-storage

Thanks for the little tutorial. Unfortunately it doesn't seem to be compatible with R2's custom domains feature, which is the only way to use their caching. If you set AWS_S3_ENDPOINT_URL to the custom domain, uploads don't work, but rendering (with caching) does. And if you use AWS_S3_CUSTOM_DOMAIN, then it doesn't generate signed URLs (so uploading works and the rendering doesn't).

It's annoying, because it's so close to being there. If you could use AWS_S3_ENDPOINT_URL for uploading but a different URL for serving the signed images, it would be fine. If there's a way I've overlooked, please let me know.
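
For anyone following along, the two conflicting configurations described above look roughly like this (domain and account ID are placeholders):

    # Option A: point the endpoint at the custom domain.
    # Rendering (with Cloudflare caching) works, but uploads fail.
    AWS_S3_ENDPOINT_URL = "https://media.example.com"

    # Option B: keep the R2 endpoint for uploads and add the custom domain.
    # Uploads work, but URLs built from AWS_S3_CUSTOM_DOMAIN are unsigned,
    # so private objects fail to render.
    AWS_S3_ENDPOINT_URL = "https://<account-id>.r2.cloudflarestorage.com"
    AWS_S3_CUSTOM_DOMAIN = "media.example.com"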

mikhail-skorikov commented 1 year ago

@djch Any luck finding a way to solve this? I have the same problem, I think, but I'm no good at media file handling, or DevOps in general.

My files get uploaded, but I can't retrieve them unless I allow public access to the bucket and use the public bucket URL; the same does not work when using a custom domain for the public URL. In private mode, the S3 API uploads the file but does not load it. I haven't even looked into caching yet, but I will need that too.

I guess R2 is not viable to use quite yet.

djch commented 1 year ago

@mikhail-skorikov for the time being I have implemented the same solution as @timkofu and deployed a CF Worker (named "render") to proxy requests in front of the R2 bucket on a custom domain. That seems like a viable workaround until django-storages has better (or native) support for R2 storage.

banool commented 1 year ago

Would any of you want to share all the configs / code that you've put together to make this work? It's a shame that this doesn't work natively.

timkofu commented 1 year ago

It now works as expected: create a bucket, choose a region, attach a subdomain, set up CORS, and voila!

Would any of you want to share all the configs / code that you've put together to make this work?

    STORAGES = {"staticfiles": {"BACKEND": "storages.backends.s3boto3.S3StaticStorage"}}
    AWS_STORAGE_BUCKET_NAME = "bucket_name"
    AWS_LOCATION = "a_folder_inside_the_bucket"
    AWS_S3_ACCESS_KEY_ID = "r2_key"
    AWS_S3_SECRET_ACCESS_KEY = "r2_secret"
    AWS_S3_CUSTOM_DOMAIN = "things.example.com"
    AWS_S3_ENDPOINT_URL = (
        "https://s3_api.url.from.r2_bucket.settings.page/" # Yes; without the appended bucket name.
    )
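
The dict-based STORAGES setting shown above requires Django 4.2+ (older projects would use STATICFILES_STORAGE and DEFAULT_FILE_STORAGE instead), and it can also carry a "default" entry for media files. A sketch, reusing the placeholder values above:

    STORAGES = {
        "default": {"BACKEND": "storages.backends.s3boto3.S3Boto3Storage"},
        "staticfiles": {"BACKEND": "storages.backends.s3boto3.S3StaticStorage"},
    }
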
alexandernst commented 1 year ago

Is URL signing also working?

timkofu commented 1 year ago

Yes. I tested it with AWS CLI:

    aws s3 presign --endpoint-url https://account_id.r2.cloudflarestorage.com s3://private_bucket/folder_in_bucket/starfleet.png --expires-in 3600

This produced a public URL that was accessible for an hour.
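
The equivalent check from Python is a short boto3 sketch (account ID, bucket, and key are placeholders):

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://account_id.r2.cloudflarestorage.com",
        aws_access_key_id="r2_key",
        aws_secret_access_key="r2_secret",
    )
    # Build a presigned GET URL that expires in an hour.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "private_bucket", "Key": "folder_in_bucket/starfleet.png"},
        ExpiresIn=3600,
    )
    print(url)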

alexandernst commented 1 year ago

@timkofu I tried this and while the command does output a link, the link itself doesn't work. When I try to access the URL I get this error:

<Error>
  <Code>InvalidArgument</Code>
  <Message>
    Invalid Argument: Credential access key has length 20, should be 32
  </Message>
</Error>

Does it work for you? If yes, are you using a paid Cloudflare plan?

dhess commented 1 year ago

edit: sorry, wrong repo!

alexandernst commented 1 year ago

I did some research. Basically, this won't work for custom domains. Custom domains must use HMAC validation (https://developers.cloudflare.com/ruleset-engine/rules-language/functions/#hmac-validation).
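
For anyone who goes that route, Cloudflare's token-authentication docs show generating the token along these lines. This is only a sketch: it assumes a rule based on is_timed_hmac_valid_v0 with the default message format (path plus timestamp), and the secret, path, and domain are placeholders that must match your rule:

    import base64
    import hashlib
    import hmac
    import time
    import urllib.parse

    secret = b"<shared-secret-from-the-rule>"  # placeholder
    path = "/media/starfleet.png"              # placeholder
    timestamp = str(int(time.time()))

    # HMAC-SHA256 over path + timestamp, then base64- and URL-encoded.
    digest = hmac.new(secret, (path + timestamp).encode(), hashlib.sha256).digest()
    token = urllib.parse.quote_plus(base64.b64encode(digest).decode())
    signed_url = f"https://media.example.com{path}?verify={timestamp}-{token}"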

pirsquare commented 1 year ago

@timkofu I tried this and while the command does output a link, the link itself doesn't work. When I try to access the URL I get this error:

<Error>
  <Code>InvalidArgument</Code>
  <Message>
    Invalid Argument: Credential access key has length 20, should be 32
  </Message>
</Error>

Does it work for you? If yes, are you using a paid Cloudflare plan?

I've encountered and resolved this issue. In my case it was because we were still using our old AWS credentials. I would suggest overriding the storage and inspecting what access key gets printed:

    import boto3
    from storages.backends.s3boto3 import S3Boto3Storage

    class R2Storage(S3Boto3Storage):
        def _create_session(self):
            # Print the credentials the backend is actually using, to check
            # they are the R2 ones and not stale AWS credentials.
            print(f"access_key: {self.access_key}")
            print(f"secret_key: {self.secret_key}")

            if self.session_profile:
                session = boto3.Session(profile_name=self.session_profile)
            else:
                session = boto3.Session(
                    aws_access_key_id=self.access_key,
                    aws_secret_access_key=self.secret_key,
                    aws_session_token=self.security_token,
                )
            return session

I don't think it's due to the custom domain. We're using a custom domain with no issues and I've managed to get it fully working with R2.

In our case, we inherited from the S3Boto3Storage class and overrode the default settings accordingly.
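
Presumably something along these lines. A sketch of such a subclass, with placeholder values set as class attributes (class attributes take precedence over the global AWS_* settings):

    from storages.backends.s3boto3 import S3Boto3Storage

    class R2MediaStorage(S3Boto3Storage):
        bucket_name = "<bucket>"
        endpoint_url = "https://<account-id>.r2.cloudflarestorage.com"
        access_key = "<r2-access-key-id>"
        secret_key = "<r2-secret-access-key>"
        default_acl = "private"
        signature_version = "s3v4"
        querystring_auth = True  # emit signed URLs for private objects

Point DEFAULT_FILE_STORAGE (or STORAGES["default"] on Django 4.2+) at the subclass to use it.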

smyja commented 9 months ago

It now works as expected: create a bucket, choose a region, attach a subdomain, set up CORS, and voila!

Would any of you want to share all the configs / code that you've put together to make this work?

    STORAGES = {"staticfiles": {"BACKEND": "storages.backends.s3boto3.S3StaticStorage"}}
    AWS_STORAGE_BUCKET_NAME = "bucket_name"
    AWS_LOCATION = "a_folder_inside_the_bucket"
    AWS_S3_ACCESS_KEY_ID = "r2_key"
    AWS_S3_SECRET_ACCESS_KEY = "r2_secret"
    AWS_S3_CUSTOM_DOMAIN = "things.example.com"
    AWS_S3_ENDPOINT_URL = (
        "https://s3_api.url.from.r2_bucket.settings.page/" # Yes; without the appended bucket name.
    )

This worked.

alexdeathway commented 8 months ago

There is an authorization issue related to relative paths in CSS files. A similar problem was encountered with AWS in https://github.com/jschneier/django-storages/issues/734, which @dennisvang resolved by whitelisting the files in the bucket policy, but there is no such solution for Cloudflare R2. Anybody with similar issues or a solution?

jowparks commented 7 months ago

There is an authorization issue related to relative paths in CSS files. A similar problem was encountered with AWS in #734, which @dennisvang resolved by whitelisting the files in the bucket policy, but there is no such solution for Cloudflare R2. Anybody with similar issues or a solution?

This is a pretty lame workaround, but I ended up using the S3Client in Node.js instead; it just needed a short script: const { S3Client } = require("@aws-sdk/client-s3");
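
A rough Python/boto3 equivalent of such a script (directory, bucket, and environment variable names are placeholders) might be:

    import mimetypes
    import os

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://<account-id>.r2.cloudflarestorage.com",
        aws_access_key_id=os.environ["R2_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["R2_SECRET_ACCESS_KEY"],
    )

    root = "staticfiles"  # output directory of collectstatic
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            key = os.path.relpath(path, root).replace(os.sep, "/")
            content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
            s3.upload_file(path, "<bucket>", key, ExtraArgs={"ContentType": content_type})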

jschneier commented 6 months ago

Docs added in #1378