colemickens opened this issue 1 year ago
Or alternatively, I suppose attic could still serve up the narinfos and then redirect the user to a different URL template (rather than the assumed S3 URL).
This seems (without looking too closely at the code) like it might allow the user, with chunking disabled, to store the NARs in S3 normally, then have a configured publicBlobAccessTemplate (or something better named) that attic then serves redirects to.
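Something like this, hypothetically (the key name and URL template are made up; nothing here exists in attic today):

```toml
# Hypothetical option, not an existing attic setting: with chunking disabled,
# attic would redirect NAR downloads to this template instead of a presigned
# S3 URL. "{key}" stands in for the object key.
public-blob-access-template = "https://cdn.example.com/{key}"
```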
But doesn't attic need the NAR itself to sign it? I guess I keep coming back to the question of why it signs on download instead of upload. Is it about simplifying the key replacement story?
Completely missed this issue 😕 My inbox is actually a disaster...
Yes, this use case is fully supported. Downloading a NAR with only one chunk always causes a redirect to a presigned S3 URL, regardless of whether chunking is enabled. When you disable chunking with `nar-size-threshold = 0`, all new NARs will be uploaded as a single chunk.
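For reference, a minimal sketch of the server config this implies, assuming the threshold lives in a `[chunking]` section:

```toml
[chunking]
# 0 disables chunking: every new NAR is stored as a single chunk, so
# downloads always redirect to a presigned S3 URL.
nar-size-threshold = 0
```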
As for CDN integration, I'm personally putting a Cloudflare Worker in front of my personal instance. You do, however, need to be careful about authentication when caching. Furthermore, Cloudflare has a fairly low upload limit (100 MB) unless you pay for Business/Enterprise, but you can sidestep it by setting `api-endpoint` and `substituter-endpoint` to different values (this is the exact use case for the separate config).
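For illustration, the split might look like this (the hostnames are made up):

```toml
# Clients push through attic directly, sidestepping Cloudflare's 100 MB
# upload limit; substitution goes through the CDN-fronted hostname.
api-endpoint = "https://attic-origin.example.com/"
substituter-endpoint = "https://cache.example.com/"
```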
Edit:
> But doesn't attic need the NAR itself to sign it?
The signature is in `.narinfo` and the contents of `.nar`s are fully static. Attic only needs to know the content hashes of `.nar`s in order to generate a signature (more here).
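To make that concrete, here is a minimal sketch of the standard Nix signing scheme (not attic's actual code): the signature covers a textual fingerprint of the narinfo metadata, so only the NAR's hash and size are needed, never its bytes. The key name and PEM key handling here are assumptions:

```ts
import { createPrivateKey, sign } from "node:crypto";

// Nix signs a fingerprint string derived from narinfo metadata,
// not the NAR contents themselves.
function fingerprint(
  storePath: string,    // e.g. /nix/store/<hash>-<name>
  narHash: string,      // e.g. sha256:<base32 hash of the NAR>
  narSize: number,      // size of the uncompressed NAR in bytes
  references: string[]  // full store paths this path references
): string {
  return `1;${storePath};${narHash};${narSize};${references.join(",")}`;
}

// Produces the narinfo's `Sig:` value, "<key-name>:<base64 signature>".
// keyName and privateKeyPem (an Ed25519 key) are hypothetical inputs.
function signFingerprint(keyName: string, privateKeyPem: string, fp: string): string {
  const key = createPrivateKey(privateKeyPem);
  const sig = sign(null, Buffer.from(fp), key).toString("base64");
  return `${keyName}:${sig}`;
}
```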
> I guess I keep coming back to the question of why it signs on download instead of upload. Is it about simplifying the key replacement story?
Yes, and also to simplify implementation. The caching of computed `.narinfo`s can be done via some caching layer like a Cloudflare Worker, as I mentioned.
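For what it's worth, a minimal sketch of such a Worker, assuming a public cache where unauthenticated `.narinfo` GETs are safe to share (an illustration, not the Worker I actually run):

```ts
// Cloudflare Worker: cache .narinfo responses at the edge.
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    // Never cache authenticated requests, or one user's response
    // could be served to another user.
    const cacheable =
      request.method === "GET" &&
      url.pathname.endsWith(".narinfo") &&
      !request.headers.has("Authorization");
    if (!cacheable) return fetch(request);

    const cache = caches.default;
    let response = await cache.match(request);
    if (!response) {
      response = await fetch(request); // forward to the attic origin
      if (response.ok) {
        // Copy the response so its headers are mutable, then store it.
        response = new Response(response.body, response);
        response.headers.set("Cache-Control", "public, max-age=3600");
        ctx.waitUntil(cache.put(request, response.clone()));
      }
    }
    return response;
  },
};
```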
Thanks, that's a lot of helpful detail. It seems you've designed for a variety of use cases already. Might I be nosy and ask for more details?
It seems like chunking would avoid the size limitation and would further necessitate the Worker, so why have you seemingly already considered this use case (with the separate api/substituter endpoints)?
I guess the only thing I can think of is... a scenario where the user is not on CF, so doesn't have CF Workers, but still wants to leverage S3-compatible storage via a CDN? I'm not sure why that would be super helpful though, unless they're horizontally scaling attic?
As far as I can tell, users who are using R2 could potentially be accessing their storage through a Cloudflare CDN-enabled URL and benefiting from the CDN for free.
Would it be possible to disable chunking, sign on upload instead of download, and then use the blob storage directly, rather than having to go through the attic server?