RichardLitt opened this issue 7 years ago
I like to keep big image data separate from the git repo, to make clones fast even on slow connections.
Does IPFS make sense? In the future, maybe we could run an IPFS daemon with a webhook that looks for new IPFS hashes in the repo and automatically mirrors them.
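A rough, untested sketch of that webhook idea - it assumes a local `ipfs` daemon, a Flask endpoint, and a made-up checkout path, none of which we have set up:

```python
# Hypothetical sketch: on a push webhook, scan the repo checkout for
# IPFS hashes and pin each one with the local daemon.
import re
import subprocess
from pathlib import Path

from flask import Flask

app = Flask(__name__)
REPO_DIR = Path("/srv/site")  # placeholder path to a checkout of this repo

# CIDv0 hashes are "Qm" plus 44 base58 characters.
CID_RE = re.compile(r"\bQm[1-9A-HJ-NP-Za-km-z]{44}\b")

@app.route("/webhook", methods=["POST"])
def pin_new_hashes():
    """Called by a push webhook; pins every hash found in the repo."""
    hashes = set()
    for path in REPO_DIR.rglob("*.md"):
        hashes.update(CID_RE.findall(path.read_text(errors="ignore")))
    for h in sorted(hashes):
        # `ipfs pin add` tells the local daemon to keep this content around.
        subprocess.run(["ipfs", "pin", "add", h], check=False)
    return f"pinned {len(hashes)} hashes\n"

if __name__ == "__main__":
    app.run(port=8000)
```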
Agreed. Another good reason to remove these from the git repo.
IPFS would make sense. You'd know how to set that up better than me, I think. The only issue is pinning - but I guess we can have a backup bucket on AWS or something to make sure it doesn't go down?
Hm, good point: maybe something simpler would make more sense. Any solution so advanced that not everyone can readily maintain it will probably fall apart. Are you running an IPFS node of your own? Maybe you and I can manually pin hashes for the time being.
(Apparently there's Git Large File Storage, but it's hardly trivial to set up either.)
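For reference, the one-time LFS setup is roughly this - a sketch assuming `git-lfs` is installed, with the tracked patterns just as examples:

```python
# Rough sketch of the initial Git LFS setup, driven from Python so the
# commands are visible in one place. Assumes git-lfs is installed.
import subprocess

def run(*cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("git", "lfs", "install")          # hook LFS into this clone
run("git", "lfs", "track", "*.jpg")   # writes a rule into .gitattributes
run("git", "lfs", "track", "*.png")
run("git", "add", ".gitattributes")
run("git", "commit", "-m", "Track images with Git LFS")
```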
I'm not running a node; I'm a bit worried that only two copies wouldn't be reliable enough.
How do normal companies do it? Isn't it just AWS? I guess that costs money - is that the main issue?
Yeah, tossing everything into S3 would work. I'm usually fine with having only one or two 9s of uptime for my own things.
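For what it's worth, dumping a folder into S3 is only a few lines with boto3 - a sketch with a hypothetical bucket name and a local `images/` directory:

```python
# Minimal sketch of the "toss everything into S3" option using boto3.
# The bucket name and local directory are made up for illustration.
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "example-site-images"  # hypothetical bucket

for path in Path("images").rglob("*"):
    if path.is_file():
        # Key mirrors the local layout, e.g. images/2017/foo.jpg.
        s3.upload_file(str(path), BUCKET, path.as_posix())
        print("uploaded", path)
```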
Netlify might make it easier than S3, though. We could just put all of the images in a submodule - we already have one for the theme, why not one for images?
https://www.netlify.com/blog/2015/03/06/netlify-vs-amazon-s3/
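The submodule part would be one command, same as the theme - sketch below with a placeholder repo URL, since no images repo exists yet:

```python
# Sketch of adding an images submodule, mirroring the existing theme one.
# The repository URL is a placeholder.
import subprocess

subprocess.run(
    ["git", "submodule", "add",
     "https://github.com/example/site-images.git", "images"],
    check=True,
)
# New clones would then need: git clone --recurse-submodules <repo-url>
```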
Nice: that sounds much easier than mucking with S3 directly.
Is a static site the best way to store shit tonnes of photos? Or is there a better way?