Closed NobodyXu closed 4 months ago
cc @alsuren: @Milo123459 opened this PR to sandbox builds on Linux using Docker and gVisor.
FYI, gVisor is a container runtime created by Google that minimizes the attack surface by implementing a minimal set of syscalls in userspace; it also has its own userspace overlayfs and network stack.
On GHA it uses the seccomp-based Systrap platform to trap and emulate syscalls; on bare metal it can use KVM: https://gvisor.dev/docs/architecture_guide/platforms/
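To make that concrete, here is roughly how gVisor plugs into Docker (an illustrative sketch, not taken from this PR; the `runsc` binary path is an assumption and varies by install, and on a real host the config goes to `/etc/docker/daemon.json` rather than `/tmp`):

```shell
# Register gVisor's runsc runtime with the Docker daemon (sketch; written
# to a temp file here so it can run without root).
cat > /tmp/daemon.json <<'EOF'
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
EOF

# After `systemctl restart docker`, containers can opt in to the gVisor sandbox:
#   docker run --runtime=runsc --rm hello-world
cat /tmp/daemon.json
```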
(I've not read the code yet)
What is the threat model that this is mitigating against?
Currently we have a sandbox around the job that builds the crate (each job in GitHub Actions runs in an entirely separate VM): it can only communicate with the rest of the world by putting things in the job's build artifacts; a separate job (VM) then fetches the artifact and pushes it to GitHub Releases.
As soon as we run our first untrusted code, we treat the whole job runner as compromised, so I don't think we gain anything by adding another layer of sandboxing inside that.
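The two-job layout described above might look roughly like this (a hypothetical sketch with made-up job, artifact, and `$TAG` names, not the actual quickinstall workflow):

```shell
# Sketch of a two-VM pipeline: untrusted code runs only in `build`, and the
# only channel between the jobs is the uploaded artifact.
cat > /tmp/pipeline.yml <<'EOF'
jobs:
  build:                      # separate VM; this is where untrusted code runs
    steps:
      - run: cargo build --release
      - uses: actions/upload-artifact@v4
        with:
          name: binaries
          path: target/release/
  publish:                    # fresh VM; never runs untrusted code
    needs: build
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: binaries
      - run: gh release upload "$TAG" binaries/*
EOF
grep -q 'needs: build' /tmp/pipeline.yml && echo "publish waits for build"
```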
> What is the threat model that this is mitigating against?
I think each job in GitHub Actions still has access to some (implicitly defined) environment variables and some GitHub context, plus root access to the VM.
For example, the cargo build can access `GITHUB_ENV` and `GITHUB_OUTPUT`.
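As an illustration of why that matters: `GITHUB_ENV` and `GITHUB_OUTPUT` are just files whose paths are exposed via environment variables, so any code the build step runs can append to them and influence later steps (simulated here with temp files):

```shell
# Simulate how GitHub Actions exposes GITHUB_ENV/GITHUB_OUTPUT as plain files.
export GITHUB_ENV="$(mktemp)"
export GITHUB_OUTPUT="$(mktemp)"

# Any code the build runs (e.g. a malicious build script) can append to them:
echo "CARGO_HOME=/tmp/attacker-controlled" >> "$GITHUB_ENV"
echo "artifact-path=/tmp/evil" >> "$GITHUB_OUTPUT"

cat "$GITHUB_ENV"   # prints: CARGO_HOME=/tmp/attacker-controlled
```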
cc @alsuren For the upload action, my understanding is that the action, written in JS, runs on the VM itself. Otherwise an action written in JS wouldn't have access to the VM's filesystem and wouldn't be able to upload anything; AFAIK JS actions just import `fs` as normal, so I doubt there's any remote FS access there.
Since the action needs to upload data, it must have access to `secrets.GITHUB_TOKEN`, which is a bit dangerous: at the very least, having access to it allows attackers to launch a DoS attack using the GitHub token, preventing any job from running by exhausting the rate-limit points of the REST/GraphQL APIs.
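One standard mitigation (a general GitHub Actions feature, not something this PR implements) is to scope the token per job via `permissions:`, so the job that runs untrusted code only ever sees a read-only token; a sketch with hypothetical job names:

```shell
# Sketch: per-job token scoping, so the untrusted build job's GITHUB_TOKEN
# cannot publish anything even if it leaks.
cat > /tmp/workflow-permissions.yml <<'EOF'
jobs:
  build:                 # runs untrusted crate code
    permissions:
      contents: read     # read-only token: cannot push releases or tags
  upload:
    needs: build
    permissions:
      contents: write    # only this job can publish artifacts
EOF
grep -q 'contents: read' /tmp/workflow-permissions.yml && echo "build job token is read-only"
```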
The docker container isolates the build from these variables (otherwise the build could read them through `/proc` anyway). In the future we could also limit network access.
Another thing I planned to do is run each `rustc` invocation in its own untouchable container sandboxed by gVisor.
That way, it becomes possible to use sccache to cache the compiled artifacts.
It would also provide a sandbox for proc-macros, though unfortunately I'm still unable to figure out how to sandbox build scripts.
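A rough sketch of the per-`rustc` idea using cargo's `RUSTC_WRAPPER` hook (the image name and mount layout are assumptions; a real wrapper would need more care with toolchain paths and caching):

```shell
# Hypothetical wrapper script: cargo invokes it as `wrapper rustc <args>`,
# and each compiler invocation gets its own throwaway gVisor sandbox.
cat > /tmp/sandboxed-rustc.sh <<'EOF'
#!/bin/sh
# "$@" is the real rustc command line; run it inside a runsc container.
# (Assumes the rustc path is valid inside the image, which a real setup
# would have to guarantee, e.g. by using the same toolchain in the image.)
exec docker run --runtime=runsc --rm -v "$PWD:$PWD" -w "$PWD" rust:latest "$@"
EOF
chmod +x /tmp/sandboxed-rustc.sh

# cargo would then be pointed at it:
#   RUSTC_WRAPPER=/tmp/sandboxed-rustc.sh cargo build
```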
The failure is because the old version of zig writes to its own installation directory, which is immutable.
The blocker for newer zig is that it rejects the unknown linker flag `--undefined`, which is used by cargo-auditable.
I think it makes sense to remove cargo-auditable for now, so that we can upgrade to the latest release: since the build process can modify whatever it likes and leave processes behind, it is trivial to tamper with the binary and defeat the auditing.
cc @alsuren I just discovered https://github.com/cackle-rs/cackle, a tool with sandboxing and code auditing built in, which seems better than my homebrew solution.
I will close this PR, and open a new PR to refactor/rework the build-system of quickinstall instead.
Related: #251