buchgr / bazel-remote

A remote cache for Bazel
https://bazel.build
Apache License 2.0

Readme could use a few "reasonable setup" examples #457

Open djmarcin opened 3 years ago

djmarcin commented 3 years ago

We have been using bazel-remote in the following configuration:

However we have no idea if this setup makes sense, or if we should consider a setup more like this one:

It's not clear from the readme how these two setups differ in terms of tradeoffs, or how this might impact things like compression -- e.g. will bazel compress artifacts before sending them to the bazel-remote cache, or do we need the local bazel-remote instance to proxy the requests first?

It would be nice to have a few example setups and some of the tradeoffs to consider with each.

mostynb commented 3 years ago

Hi, adding some example configurations to the docs is a good idea - I'll try to organise that.

To answer your questions: normally I would set up a single bazel-remote instance (with the S3 backend if you wish) and configure CI and developer machines to talk to that, with whichever access policy you're comfortable with (e.g. only allowing CI to write to the cache). That way you don't need to maintain a cache on each CI machine between jobs, and the chances of getting a cache hit from bazel-remote's disk cache layer increase (assuming it's a reasonable size for your codebase).
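As a rough sketch of that setup (hostnames, bucket names and credentials below are placeholders, and the exact flag spellings are documented in the README and may vary between versions):

```
# On the shared cache server, assuming the default ports
# (HTTP on 8080, gRPC on 9092).
bazel-remote \
  --dir /var/cache/bazel-remote \
  --max_size 100 \
  --s3.endpoint s3.us-east-1.amazonaws.com \
  --s3.bucket my-bazel-cache \
  --s3.access_key_id "$S3_ACCESS_KEY" \
  --s3.secret_access_key "$S3_SECRET_KEY"

# CI machines (.bazelrc): read from and write to the shared cache.
build --remote_cache=grpc://cache.example.com:9092

# Developer machines (.bazelrc): read from the cache but don't upload
# locally built results.
build --remote_cache=grpc://cache.example.com:9092
build --remote_upload_local_results=false
```

Note that `--remote_upload_local_results=false` only stops the client from uploading; if you want to enforce that only CI can write, you'd also put some form of authentication in front of the cache (bazel-remote supports a few options, see the README).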

If you don't want developers to be able to write to the same cache as CI, you might consider running a second bazel-remote instance only for developers (possibly also using the same S3 bucket, but not uploading there).
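Something like the following, for example. How the developer-facing instance would share the bucket without writing to it isn't covered here; in this sketch I'm assuming it would be enforced with S3-side permissions (credentials that only allow reads) rather than a bazel-remote flag:

```
# Hypothetical second instance, reserved for developers.
# Assumption: read-only access to the shared bucket is enforced by giving
# this instance S3 credentials that only permit reads.
bazel-remote \
  --dir /var/cache/bazel-remote-dev \
  --max_size 100 \
  --s3.endpoint s3.us-east-1.amazonaws.com \
  --s3.bucket my-bazel-cache \
  --s3.access_key_id "$S3_READONLY_ACCESS_KEY" \
  --s3.secret_access_key "$S3_READONLY_SECRET_KEY"
```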

I'm aware of a couple of benefits to running bazel-remote on client machines, both of which could eventually be addressed in bazel itself (see the sketch after this list):

1. Some people use this to make bazel upload results asynchronously to a central cache, which allows bazel builds to finish sooner while uploads continue for a while afterwards.
2. It allows the use of bazel-remote's compressed storage mode, which uploads/downloads compressed blobs to/from the proxy backends (like S3). This can be faster than transferring uncompressed blobs. Bazel itself doesn't support compression yet; you can watch https://github.com/bazelbuild/bazel/issues/12670 for updates on that.
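For example, a client-side instance acting as a local proxy in front of S3 might look something like this (again, the values are placeholders and the flags are described in the README):

```
# Per-developer-machine bazel-remote, proxying to S3.
# --storage_mode zstd stores and transfers blobs compressed (point 2 above).
bazel-remote \
  --dir "$HOME/.cache/bazel-remote" \
  --max_size 50 \
  --storage_mode zstd \
  --s3.endpoint s3.us-east-1.amazonaws.com \
  --s3.bucket my-bazel-cache \
  --s3.access_key_id "$S3_ACCESS_KEY" \
  --s3.secret_access_key "$S3_SECRET_KEY"

# bazel on the same machine talks to the local instance (.bazelrc), so
# uploads finish quickly from bazel's point of view while bazel-remote
# pushes blobs to S3 in the background (point 1 above).
build --remote_cache=grpc://localhost:9092
```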

It is also possible to run bazel-remote on the client, and have it talk to a shared bazel-remote instance, but that doesn't currently work with compression. I should try to implement that soon.
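In case it helps to picture it, that chained setup would look roughly like this, with the shared instance used as the local instance's HTTP proxy backend (the --http_proxy.url flag name is taken from the README and may differ between versions):

```
# Local bazel-remote using a shared bazel-remote instance as its proxy
# backend instead of S3. As noted above, this doesn't currently work with
# the compressed storage mode.
bazel-remote \
  --dir "$HOME/.cache/bazel-remote" \
  --max_size 50 \
  --http_proxy.url http://cache.example.com:8080

# (.bazelrc) point bazel at the local instance:
build --remote_cache=grpc://localhost:9092
```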