brendanhay closed this 1 year ago
I'd be open to other ideas on how to handle botocore - currently the subtree has been added via `--squash`, but it still adds a lot of unnecessary shit to the already large repo.

As an example, maybe follow the Bazel approach in spirit: obtain the git SHA via `curl -s -H "Accept: application/vnd.github.VERSION.sha" "https://api.github.com/repos/boto/botocore/commits/develop"` and store an untracked local temporary archive/dir in the repository root, which would be fed into the generator. The SHA would guarantee reproducibility without needing us to track any botocore files or commit history. This would just be baked into the generator CLI directly.
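A rough sketch of that flow, in shell. The cache directory name and helper functions here are hypothetical illustrations, not part of any existing CLI:

```shell
set -euo pipefail

# GitHub API endpoint for the tip of botocore's develop branch.
botocore_head_url="https://api.github.com/repos/boto/botocore/commits/develop"

# Build the tarball URL for a given commit SHA.
botocore_archive_url() {
  printf 'https://github.com/boto/botocore/archive/%s.tar.gz\n' "$1"
}

# Resolve develop to a concrete SHA and download that exact revision
# into an untracked local cache directory (name is illustrative).
fetch_botocore() {
  local sha
  sha=$(curl -s -H "Accept: application/vnd.github.VERSION.sha" "$botocore_head_url")
  mkdir -p .botocore-cache
  curl -sL "$(botocore_archive_url "$sha")" \
    -o ".botocore-cache/botocore-${sha}.tar.gz"
  echo "$sha"
}
```

The SHA returned by the API call is what would be recorded for reproducibility; the tarball itself stays untracked.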
*twiddles thumbs* - let's see how long these CI builds take to prime the cache ..
> I'd be open to other ideas on how to handle botocore - currently the subtree has been added via `--squash`, but it still adds a lot of unnecessary shit to the already large repo.
I've faced similar decisions in the past and defaulted to dynamically cloning the referenced repository on demand.
Probably simpler. I assume only a small subset of people want to run the generator, so cloning on demand shouldn't be too much of a hit. Just store the botocore SHA/pin in `configs/` and update it as necessary to pull the latest service definitions.
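A minimal sketch of that clone-on-demand idea, assuming a pin file such as `configs/botocore-pin` (the file name and helper function are illustrative, not existing repository contents):

```shell
set -euo pipefail

# Clone a repository at exactly the SHA recorded in a pin file, using a
# shallow fetch so we never pull full history. GitHub allows fetching by
# SHA; other remotes may need uploadpack.allowAnySHA1InWant enabled.
clone_at_pin() {
  local pin_file=$1 dest=$2 remote=${3:-https://github.com/boto/botocore.git} sha
  sha=$(cat "$pin_file")
  git init -q "$dest"
  git -C "$dest" remote add origin "$remote"
  git -C "$dest" fetch -q --depth 1 origin "$sha"
  git -C "$dest" checkout -q FETCH_HEAD
}
```

The generator would then read service definitions out of the resulting checkout instead of a vendored subtree.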
We could instead declare botocore as a non-flake input in `flake.nix`. The generator is probably simple enough that you could build it using nixpkgs' Haskell tooling, which means you could write a derivation that runs `gen` against that particular botocore pin, then rsync the service bindings into place with a script similar to Dhall's prelude linter?
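A minimal sketch of what that non-flake input could look like (the surrounding flake structure is illustrative, not the repository's actual `flake.nix`):

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";

    # Track botocore as raw data rather than a flake; the exact revision
    # is pinned in flake.lock and bumped with
    # `nix flake lock --update-input botocore`.
    botocore = {
      url = "github:boto/botocore";
      flake = false;
    };
  };

  outputs = { self, nixpkgs, botocore, ... }: {
    # A derivation could then run the generator against
    # "${botocore}/botocore/data" to produce the service bindings.
  };
}
```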
I've added `outputs.apps.*` and `outputs.packages.botocore`, pointing to the generator and the botocore `data/` folder, respectively. The `generate`/`generate-config` scripts then just call these via `nix *` as necessary. The generator now uses the default nixpkgs Haskell package set and is the default output package, so `nix build` builds `amazonka-gen`.
The cache of `dist-newstyle` and `~/.cabal/store` for a single job (OS + GHC version pair) is already at ~2 GB (compressed):

> Cache Size: ~2070 MB (2170822777 B)
Per the GitHub Actions cache docs:

> There is no limit on the number of caches you can store, but the total size of all caches in a repository is limited to 10 GB. If you exceed the limit, GitHub will save the new cache but will begin evicting caches until the total size is less than the repository limit.
With the GHC versions designated by @endgame we have Linux builds of GHC `9.0.*`, `9.4.*`, and `9.6.*`, and a single macOS build of GHC `9.4.*`, for a total of 4 builds × ~2 GB, keeping us within the total cache limit. Note: this assumes the cache is tied only to `main` push/PR events - otherwise subsequent non-`main` pushes would evict/overwrite the cache due to the 10 GB limit.
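As a quick sanity check of that arithmetic:

```shell
# Four ~2070 MB caches against GitHub's 10 GB (10240 MB) per-repository limit.
per_cache_mb=2070
builds=4
total_mb=$((per_cache_mb * builds))
limit_mb=10240
echo "total: ${total_mb} MB of ${limit_mb} MB"   # total: 8280 MB of 10240 MB
```

That leaves roughly 2 GB of headroom before eviction kicks in, which is why restricting the cache to `main` events matters.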
The two generator-related builds only use Cachix and not the GitHub caching allowance. Total Cachix size for a clean build of all versions is ~408 MB.
The only remaining item I'm aware of is to update the documentation / `README.md`.
Aside: I'm interested to see the reaction to having a nix flake that can't actually build the service libraries (directly). 💩
@brendanhay since you're already using pre-commit-hooks.nix, would you consider enabling the `cabal2nix` hook? That would greatly reduce the number of IFDs (imports from derivation) required to build these packages.
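For context, enabling that hook with pre-commit-hooks.nix generally looks something like the following sketch, which keeps a generated `default.nix` checked in next to each `.cabal` file so nix consumers can import plain files instead of evaluating IFD (the surrounding flake wiring is assumed, not taken from this repository):

```nix
{
  checks.pre-commit = pre-commit-hooks.lib.${system}.run {
    src = ./.;
    hooks = {
      # Regenerate per-package default.nix files from the .cabal files
      # on commit, so they stay in sync automatically.
      cabal2nix.enable = true;
    };
  };
}
```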
@ConnorBaker Are you referring to IFDs from when amazonka is used as a dependency through nix - i.e. are you asking for a derivation per `.cabal` file to be checked in to the repository proper?
@brendanhay Will you have time soon to look at this? Would it help if I made a PR or commit with the straightforward suggestions? Apart from GHC version updates and other routine chores, I don't think there's really much else left before we can cut another RC.
Feel free to commit directly.
Okay @brendanhay, I've done all the fixes. Can you have a look at the remaining two discussions and see if there's anything you want to change? If not, just hit resolve and then merge.

I still haven't done the README; I'll update it in another PR.
Removes Bazel and adds a new `flake.nix` which explicitly doesn't provide a nix build for `lib/amazonka` or any of the services, to avoid dealing with `haskell.nix` or similar alternative Haskell package sets. The intent is to keep the number of moving parts and build dependencies as minimal as possible, to avoid being blocked on myself in the event of CI failure. Instead we rely on the nix flake for build caching and toolchain setup, and on cabal for incremental builds/development - in which case it may be possible (untested) for users to manage GHC themselves via `ghcup` and just use cabal directly.

For actual contributions it is assumed developers will either use `direnv` to auto-magically enter a nix shell from the provided `.envrc`, or explicitly choose which GHC compiler version to use via `nix develop`. The following examples outline which compiler versions are supported by this PR:

Once you've entered the shell of your choice you can use `cabal build` etc. as usual:

- CI has been updated to reflect the above steps in the build matrix for Linux/macOS.
- nix-precommit-hooks have been added to format/lint the shell scripts and Haskell code.
- botocore has been vendored via `git-subtree` to `vendor/botocore`. The generator and related scripts have been updated to use this new subtree:

The `README.md` and CI caching configuration still need to be updated, and the latter tested/tweaked/massaged.