Closed: stuhood closed this issue 6 years ago.
I was thinking about lfs earlier today, but not as an auxiliary mechanism, and it seemed like we would easily overrun bandwidth limits if it was the primary way for users to fetch binary tools. However, as a backup, like @mateor just mentioned, it seems like it would work well: pantsbuild/binaries SHA <-> binary content. There's no reason this can't be done in addition to versioned S3, if it's just as easy to do.

Just some thoughts about git-lfs in isolation: unless someone is intimately familiar with S3 versioning, it seems almost definitely superior to Glacier, if its use is restricted to a private (pantsbuild) repo? I'm not opinionated about this; mostly I just like the idea of an easy mapping from git SHA <-> binary content. Reading up on pricing schemes for all of these now.
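The SHA <-> binary content mapping above can be illustrated with plain shell. This is only a sketch: the bucket name matches the one discussed later in the thread, but the `blobs/` key layout is an assumption for illustration, not the repo's actual scheme.

```shell
#!/usr/bin/env sh
# Content-address a blob by its SHA-256 (git-lfs does the same internally).
# The S3 key layout below is a hypothetical illustration.
set -eu
blob="$(mktemp)"
printf 'hello' > "$blob"    # stand-in for real binary content
sha="$(sha256sum "$blob" | cut -d' ' -f1)"
echo "would publish as: s3://binaries.pantsbuild.org/blobs/${sha}"
```

Because the key is derived from the content itself, the same bytes always map to the same address, which is what makes the SHA <-> content mapping stable across mirrors.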
As mentioned by me elsewhere, we heavily extend binary_utils internally, and we do pretty much what is described immediately above. This is how I distribute source tarballs for Docker images and RPMs, native libs, and any versioned blobs that don't have a Pants-supported distribution protocol.
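As a sketch of what fetching such a versioned blob could look like — the name/version URL layout here is a hypothetical convention for illustration, not the repo's documented scheme:

```shell
#!/usr/bin/env sh
# Build the download URL for a named, versioned blob.
# The path layout is an assumption for illustration.
blob_url() {
  # $1 = name, $2 = version
  printf 'https://binaries.pantsbuild.org/blobs/%s/%s/%s.tar.gz' "$1" "$2" "$1"
}

# Dry run: print the fetch command rather than hitting the network.
echo curl -fsSL -O "$(blob_url libfoo 1.2.3)"
```

Keying the URL on an explicit name and version (rather than "latest") is what keeps old builds reproducible after new blobs are published.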
Hey gang: I've updated the text to include a recommendation about where to locate the script. Please take another look.
Will ./sync-s3.sh do the right thing?
@benjyw: Yes, it only uploads new files.
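For reference, a minimal sketch of how a script can get that "only upload new files" behavior with the AWS CLI — `aws s3 sync` skips files that already exist unchanged at the destination, so re-runs are cheap and idempotent. The local `./binaries` directory is an assumption, not necessarily what `./sync-s3.sh` actually does.

```shell
# Sketch only: sync a local checkout to the bucket. `aws s3 sync`
# uploads just the files that are new or changed locally.
# The ./binaries source path is an illustrative assumption.
aws s3 sync ./binaries s3://binaries.pantsbuild.org
```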
This fixes #28 by ending (new) storage of binaries in this repository.
Before landing this change and switching to this workflow, we should enable versioning on the binaries.pantsbuild.org S3 bucket, and likely take one other defensive measure against loss of published binaries.
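Enabling versioning on the bucket is a one-time operation. With the AWS CLI it looks like the following, run by someone with admin rights on the bucket:

```shell
# Turn on object versioning so overwritten or deleted binaries remain
# recoverable as prior versions.
aws s3api put-bucket-versioning \
  --bucket binaries.pantsbuild.org \
  --versioning-configuration Status=Enabled

# Confirm it took effect (should report "Status": "Enabled").
aws s3api get-bucket-versioning --bucket binaries.pantsbuild.org
```

Once versioning is on, an accidental overwrite or delete creates a new version (or a delete marker) rather than destroying the old object, which is the defensive property this change depends on.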