brendanhay / amazonka

A comprehensive Amazon Web Services SDK for Haskell.
https://amazonka.brendanhay.nz

Remove Bazel and replace with flake.nix #899

Closed · brendanhay closed 1 year ago

brendanhay commented 1 year ago

Removes Bazel and adds a new flake.nix which deliberately does not provide a nix build for lib/amazonka or any of the services, to avoid depending on haskell.nix or a similar alternative Haskell package set. The intent is to keep the number of moving parts and build dependencies as small as possible, so that nobody is blocked on me in the event of a CI failure. Instead we rely on the nix flake for build caching and toolchain setup, and on cabal for incremental builds/development. In that setup it may also be possible (untested) for users to manage GHC themselves via ghcup and just use cabal directly.

For actual contributions it is assumed developers will either use direnv to automatically enter a nix shell via the provided .envrc, or explicitly choose a GHC compiler version via nix develop. The following examples show the compiler versions supported by this PR:

nix develop # defaults to GHC 9.2.7 currently.
nix develop .#ghc902
nix develop .#ghc927
nix develop .#ghc944
nix develop .#ghc961
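
For the direnv route, the provided .envrc typically contains little more than the following (a sketch of the common pattern, not a quote of the file in this PR):

```
# .envrc - direnv enters the flake's default dev shell on cd
use flake
```

Run `direnv allow` once in the checkout to trust it; after that the shell is entered and reloaded automatically.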

Once you've entered the shell of your choice you can use cabal build etc. as usual:

cabal update
cabal build amazonka
cabal build all

CI has been updated to reflect the above steps in the build matrix for linux/macOS.

pre-commit-hooks.nix hooks have been added to format/lint the shell scripts and Haskell code.

botocore has been vendored via git-subtree to vendor/botocore. The generator and related scripts have been updated to use this new subtree:

./scripts/generate
./scripts/generate-configs
./scripts/update-botocore

The README.md and CI caching configuration still need to be updated, and the latter tested/tweaked/massaged.

brendanhay commented 1 year ago

I'd be open to other ideas on how to handle botocore - currently the subtree has been added via --squash but it still adds a lot of unnecessary shit to the already large repo.

brendanhay commented 1 year ago

As an example, maybe mirror the Bazel approach: obtain the git SHA via curl -s -H "Accept: application/vnd.github.VERSION.sha" "https://api.github.com/repos/boto/botocore/commits/develop" and store an untracked local temporary archive/dir in the repository root, which would be fed into the generator. The SHA would guarantee reproducibility without requiring us to track any botocore files or commit history.

This would just be baked into the generator CLI directly.
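
The SHA-pinning idea could look roughly like this as a shell sketch (function names, the archive URL, and the local file name are illustrative, not part of the actual generator):

```shell
# Hypothetical sketch of pinning botocore by commit SHA instead of a subtree.
botocore_sha_url() {
  # Endpoint quoted in the discussion above; returns the SHA of develop's HEAD.
  echo "https://api.github.com/repos/boto/botocore/commits/develop"
}

fetch_botocore() {
  # Download an untracked tarball of the pinned commit into the repo root.
  local sha="$1"
  curl -sL "https://github.com/boto/botocore/archive/${sha}.tar.gz" \
    -o "botocore-${sha}.tar.gz"
}

# Usage (requires network):
#   sha=$(curl -s -H "Accept: application/vnd.github.VERSION.sha" "$(botocore_sha_url)")
#   fetch_botocore "$sha"
```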

brendanhay commented 1 year ago

*twiddles thumbs* - let's see how long these CI builds take to prime the cache ..

mbj commented 1 year ago

> I'd be open to other ideas on how to handle botocore - currently the subtree has been added via --squash but it still adds a lot of unnecessary shit to the already large repo.

I've faced similar decisions in the past and defaulted to dynamically cloning the referenced repository when needed.

brendanhay commented 1 year ago

Probably simpler. I assume only a small subset of people will usually want to run the generator, so cloning on demand shouldn't be too much of a hit. Just store the botocore SHA/pin in configs/ and update it as necessary to pull the latest service definitions.

endgame commented 1 year ago

We could instead declare botocore as a non-flake input in flake.nix. The generator is probably simple enough that you could build it using nixpkgs' haskell stuff, which means you could write a derivation which runs gen against that particular botocore pin. Then rsync the service bindings into place with a script similar to Dhall's prelude linter?
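
Declaring botocore as a non-flake input might look like this (a hedged Nix fragment; only the `flake = false` pattern is standard, the rest is illustrative):

```nix
# Hypothetical fragment: pin botocore without treating it as a flake.
{
  inputs.botocore = {
    url = "github:boto/botocore";
    flake = false; # fetch the repo contents only, pinned in flake.lock
  };

  outputs = { self, nixpkgs, botocore, ... }: {
    # A generator derivation could then read the service definitions
    # from "${botocore}/botocore/data" at build time.
  };
}
```

`nix flake update botocore` (or `nix flake lock --update-input botocore` on older nix) would then bump the pin.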

brendanhay commented 1 year ago

I've added outputs.apps.* and outputs.packages.botocore pointing to the generator and the botocore data/ folder, respectively. The generate/generate-configs scripts then just call these via nix * as necessary. The generator now uses the default nixpkgs Haskell package set and is the default output package, so nix build builds amazonka-gen.
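
The rough shape of those outputs (attribute paths and the derivation names `amazonka-gen` / `botocore-data` are placeholders, not the merged code):

```nix
# Hypothetical fragment of the flake outputs described above.
outputs = { self, nixpkgs, ... }: {
  # `nix build` builds the generator by default.
  packages.x86_64-linux.default = amazonka-gen;
  # The botocore data/ folder exposed as its own package output.
  packages.x86_64-linux.botocore = botocore-data;
  # `nix run .#gen` runs the generator binary.
  apps.x86_64-linux.gen = {
    type = "app";
    program = "${amazonka-gen}/bin/amazonka-gen";
  };
};
```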

brendanhay commented 1 year ago

The cache of dist-newstyle and ~/.cabal/store for a single job (OS + GHC version pair) is already at ~2 GB (compressed):

Cache Size: ~2070 MB (2170822777 B)

Per the GitHub Actions cache docs:

> There is no limit on the number of caches you can store, but the total size of all caches in a repository is limited to 10 GB. If you exceed the limit, GitHub will save the new cache but will begin evicting caches until the total size is less than the repository limit.

With the GHC versions designated by @endgame we have Linux builds for GHC 9.0.*, 9.4.*, and 9.6.*, plus a single macOS build for GHC 9.4.*, for a total of 4 builds × ~2 GB ≈ 8 GB, keeping us within the total cache limit. Note: this assumes the cache is tied only to main push/PR events - otherwise subsequent non-main pushes would evict/overwrite the cache due to the 10 GB limit.

The two generator related builds only use Cachix and not the GitHub caching allowance.
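
A cache step keyed on the OS + GHC pair could look roughly like this (a hedged actions/cache sketch; the key scheme and matrix variable names are assumptions, not this workflow's actual contents):

```yaml
# Hypothetical per-job cabal cache, one entry per OS + GHC version pair.
- uses: actions/cache@v3
  with:
    path: |
      dist-newstyle
      ~/.cabal/store
    key: cabal-${{ runner.os }}-${{ matrix.ghc }}-${{ github.sha }}
    restore-keys: |
      cabal-${{ runner.os }}-${{ matrix.ghc }}-
```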

brendanhay commented 1 year ago

Total Cachix size for a clean build of all versions is ~408 MB.

brendanhay commented 1 year ago

The only remaining item I'm aware of is to update the documentation / README.md.

Aside: I'm interested to see the reaction to having a nix flake that can't actually build the service libraries (directly). 💩

ConnorBaker commented 1 year ago

@brendanhay since you're already using pre-commit-hooks.nix, would you consider enabling the cabal2nix hook? That would greatly reduce the number of IFDs required to build these packages.
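
Enabling that hook in pre-commit-hooks.nix could look roughly like this (a sketch assuming the usual pre-commit-hooks.nix calling convention; the surrounding binding names are illustrative):

```nix
# Hypothetical fragment: the cabal2nix hook regenerates a checked-in
# default.nix per .cabal file, so downstream nix consumers don't need
# import-from-derivation (IFD) to evaluate the packages.
pre-commit-check = pre-commit-hooks.lib.${system}.run {
  src = ./.;
  hooks = {
    cabal2nix.enable = true;
    # existing shell/Haskell format and lint hooks stay enabled alongside
  };
};
```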

brendanhay commented 1 year ago

@ConnorBaker Are you referring to IFDs from when you use amazonka as a dependency through nix - i.e. you're asking for derivations per .cabal file to be checked into the repository proper?

endgame commented 1 year ago

@brendanhay Will you have time soon to look at this? Would it help if I made a PR or commit with the straightforward suggestions? Apart from GHC version updates and other routine chores, I don't think there's really much else left before we can cut another RC.

brendanhay commented 1 year ago

Feel free to commit directly.

endgame commented 1 year ago

Okay @brendanhay I've done all the fixes. Can you have a look at the remaining two discussions and see if there's anything you want to change? If not, just hit resolve and then merge.

brendanhay commented 1 year ago

I still haven't done the README; I'll update it in another PR.