aaronborden-rivian opened this issue 3 months ago
... and I'm not sure if it matters, but assume we'd want to use a remote cache, like S3 or memcached.
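For reference, a minimal sketch of how a shared remote cache backend can be selected via environment variables today (the bucket and host names below are placeholders; check the sccache docs for the exact options your version supports):

```sh
# Hedged sketch: pointing sccache at a shared remote cache backend.
# "my-sccache-bucket" and "memcached.cache.svc" are placeholder names.

# S3 backend
export SCCACHE_BUCKET=my-sccache-bucket
export SCCACHE_REGION=us-east-1
# export SCCACHE_ENDPOINT=minio.cache.svc:9000   # for an S3-compatible store

# or a memcached backend
# export SCCACHE_MEMCACHED=tcp://memcached.cache.svc:11211
```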
Why don't you run sccache on each runner?
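Running it per runner would look roughly like this inside a CI job (a sketch, assuming a Rust build; swap the wrapper variables for C/C++ compilers as needed):

```sh
# Hedged sketch of sccache on each runner: every job starts its own local
# sccache server, and the remote-cache env vars from the previous comment
# let all runners share the same cache, so warm artifacts are still reused
# across machines.
export RUSTC_WRAPPER=sccache          # cargo invokes rustc through sccache
# export CC="sccache gcc"             # or wrap a C/C++ compiler instead
# export CXX="sccache g++"

cargo build --release
sccache --show-stats                  # inspect cache hits/misses for this job
```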
Would you expect it to be used to compile files solely on a single machine, like a repo (in which case a runner would be appropriate, as @Saruniks said), or would you expect the client to send the necessary files to the server? If the latter, we'd need to handle a more serious connection between client and server, and I imagine we'd need to revamp the architecture a bit to store a copy of each project, because I don't think it would scale well for large codebases if they had to be sent over every time. We'd also need to keep track of what changes, and at that point I think we're encroaching on versioning and central repos anyway.

With all that said, I'm not that experienced with Kubernetes. I think the biggest issue is how the server will have access to the files. If the server has local access to the files, then I don't imagine there would be much difficulty. In that case, I already made a branch on my clone with the necessary changes to let you modify the config file, and I can make a PR if it would help you. Let me know what you think.
It looks like the client hard-codes 127.0.0.1 as the server IP, with a configurable port. Is it possible to run the server on a remote machine?
We run GitLab CI using the Kubernetes executor. I'd like to run the server as a centralized service in the cluster, or as a DaemonSet on the node. Conceptually, this is similar to how Jenkins is configured, where multiple build jobs use a shared server on the host. However, in Kubernetes, running as a DaemonSet or a centralized service requires configuring the sccache server with a remote IP (cluster service domain).
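To make the gap concrete, here is a sketch of what a long-lived, node-local server could look like today; the port and idle timeout are configurable, but as far as I can tell there is no client-side setting for a remote server address, which is the missing piece for a DaemonSet or central-service setup:

```sh
# Hedged sketch: a long-lived sccache server as it might run in a DaemonSet
# pod (or directly on the node). These settings exist today:
export SCCACHE_SERVER_PORT=4226   # port the server listens on (default 4226)
export SCCACHE_IDLE_TIMEOUT=0     # 0 = never shut the server down
sccache --start-server

# Missing piece: client pods would need something like a configurable server
# address (hypothetical, not implemented today) instead of the hard-coded
# 127.0.0.1.
```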
This question only refers to caching; for distributed compilation, it looks like the scheduler and build servers already run on remote machines.
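For comparison, the distributed-compilation path already accepts a remote address in the client config; a sketch, assuming the default client config location and the [dist] section (the URL is a placeholder, and auth settings are omitted):

```sh
# Hedged sketch: the sccache *client* config for distributed compilation,
# which already takes a remote scheduler URL (unlike the local-server IP).
mkdir -p ~/.config/sccache
cat > ~/.config/sccache/config <<'EOF'
[dist]
# placeholder host; 10600 is the scheduler's usual port
scheduler_url = "http://scheduler.build.svc.cluster.local:10600"
# auth settings (e.g. a token) are also required but omitted here
EOF
```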