holiman opened this issue 5 years ago
The problem here is that we are hosting mainnet-configuration tests, which are quite large; too large, really, for a normal git repository. But we still wanted to version them. On top of that, multiple eth2 implementer teams are downloading them, and some of them unfortunately don't cache them in their CI, so we reached the free limit relatively quickly. Things should be better with the latest refactor: individual tests that are identical between versions no longer have to be redownloaded, since they are now split into separate files instead of big suite files. Alternatively, we could buy a small data pack, if it's not too much (less than a single work hour for the rest of the year?)
Each data pack is $5 per month; I have purchased two. We will monitor usage and reduce the packs if the refactor manages to bring bandwidth down.
(sorry if I jumped the gun, LMK if I should cancel it for next month)
... aaand we hit the new (bumped) limit within a couple of hours... I think some other solution is needed.
Ah, that's crazy. I wonder if one of the teams is still pulling in the complete repository on every CI run. Will check with them and see what others think. I hope the mainnet-size tests can stay; otherwise we'll have to cut them.
I expect we might hit the 100 GB limit sometime in the future, but since we are currently only using 10 GB, git-lfs is needlessly introducing these bandwidth constraints on the repo.
Shall we downgrade to a normal repo for the time being?
Never mind, I think a normal repo would have worse bandwidth restrictions. Looks like all of the teams are now caching for CI. Hope that works; otherwise we'll need to look into something else.
Just found this issue. My SSZ lib currently does not cache on CI, though I don't expect a lot of activity on it either. If needed I can enable some CI caching, but I have no idea how, so ping me if you see something annoying and I'll look into it.
@karalabe you can download the test vectors from the releases like this:
```make
SPEC_VERSION ?= v1.5.0-alpha.3

download-tests:
	mkdir -p tests/spec/eth2.0-spec-tests
	wget https://github.com/ethereum/consensus-spec-tests/releases/download/$(SPEC_VERSION)/general.tar.gz -O - | tar -xz -C tests/spec/eth2.0-spec-tests
	wget https://github.com/ethereum/consensus-spec-tests/releases/download/$(SPEC_VERSION)/minimal.tar.gz -O - | tar -xz -C tests/spec/eth2.0-spec-tests
	wget https://github.com/ethereum/consensus-spec-tests/releases/download/$(SPEC_VERSION)/mainnet.tar.gz -O - | tar -xz -C tests/spec/eth2.0-spec-tests
```
This won't incur the same costs as Git LFS, should be a bit faster, and the data can be cached in CI with the spec version as the cache key.
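If your CI has no built-in cache step, one way to key the download on the spec version is a marker file in a version-specific directory. A minimal sketch, assuming a local CACHE_DIR (the path and the .complete marker are my own invention, not part of any team's setup):

```make
SPEC_VERSION ?= v1.5.0-alpha.3
# Hypothetical cache location, keyed by spec version so a version bump
# forces a fresh download while repeated runs on the same version reuse it.
CACHE_DIR ?= $(HOME)/.cache/consensus-spec-tests/$(SPEC_VERSION)

download-tests: $(CACHE_DIR)/.complete
	mkdir -p tests/spec/eth2.0-spec-tests
	cp -r $(CACHE_DIR)/. tests/spec/eth2.0-spec-tests

# Fetch each suite from the release artifacts once per version; the
# .complete marker prevents re-downloading on subsequent runs.
$(CACHE_DIR)/.complete:
	mkdir -p $(CACHE_DIR)
	for suite in general minimal mainnet; do \
		wget https://github.com/ethereum/consensus-spec-tests/releases/download/$(SPEC_VERSION)/$$suite.tar.gz -O - \
			| tar -xz -C $(CACHE_DIR) || exit 1; \
	done
	touch $@
```

On hosted CI, CACHE_DIR would simply be whatever directory the platform's cache mechanism restores between runs.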
Hi, the ethereum organization is reaching the maximum quota for Git LFS usage. The quota being exhausted is not storage but bandwidth.
When I looked into it, it appears that only this particular repo uses LFS, due to this setting: https://github.com/ethereum/eth2.0-spec-tests/blob/master/.gitattributes
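The rules in that file look something like this (shape only; the exact patterns are in the linked file):

```gitattributes
# Illustrative LFS rule: files matching the pattern are stored in git as
# small pointer files, with the real content served from LFS storage,
# which is what the bandwidth quota meters.
*.ssz filter=lfs diff=lfs merge=lfs -text
```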
I'm not quite sure why this integration is needed here, nor the details of how LFS works, but is it really necessary?
Please advise whether we need this, because if we do we'll have to purchase additional data packs. cc @jpitts