brigadecore / brigade

Event-driven scripting for Kubernetes
https://brigade.sh/
Apache License 2.0

Brigade 2.x - Support for PVCs in jobs #1942

Closed · harrylepotter-win closed this issue 2 years ago

harrylepotter-win commented 2 years ago

In Brigade 1.x there was the ability to mount shared PVCs within brigade.js, e.g.:


  // Reference an existing PVC by name...
  var volume = {
    name: "static-content",
    persistentVolumeClaim: {
      claimName: "example-builds"
    }
  };

  // ...and mount it into the job's pod. MOUNT_POINT is a path
  // constant defined elsewhere in the script, e.g. "/var/www".
  var volumeMount = {
    name: "static-content",
    mountPath: MOUNT_POINT
  };

  job.volumes = [volume];
  job.volumeMounts = [volumeMount];

This was incredibly handy, but the capability does not appear to exist in Brigade 2.x. Am I missing something?

krancour commented 2 years ago

Hi @harrylepotter-win, thanks for the question -- and a good one at that.

You've probably noticed that Brigade 2 abstracts Kubernetes away and relegates it to an implementation detail, so there isn't really direct support for this sort of k8s-ism in Brigade 2.

That being said, there are already a couple of places where we've left "escape hatches" so that people who really know what they're doing with Kubernetes (and have direct access to the underlying cluster) can "drop down" to k8s to implement more advanced use cases.

If you can help me understand your use case, I can either help you find an idiomatic Brigade 2 approach or start thinking about how we could support it without undermining the abstraction.
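
For a sense of what "idiomatic Brigade 2" means here: a v2 script never touches pods, namespaces, or volumes. A minimal brigadier script looks roughly like this (a sketch only; the event source, image, and commands are illustrative, not prescriptive):

  import { events, Job } from "@brigadecore/brigadier"

  // Jobs are defined purely as "run this command in this container
  // image". The pod, the namespace, and any volumes behind it are
  // Brigade's concern, not the script's.
  events.on("brigade.sh/github", "push", async event => {
    const job = new Job("build", "node:18", event)
    job.primaryContainer.command = ["npm"]
    job.primaryContainer.arguments = ["run", "build"]
    await job.run()
  })

  events.process()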

harrylepotter-win commented 2 years ago

Hey @krancour, thanks for the response. We were using Brigade 1.x as part of a pipeline that published customer builds to a common shared volume, which a web server then mounted in order to host them.

I've been hacking around with the API server, in particular v2/apiserver/internal/api/kubernetes/substrate.go::createJobPod(), and have attempted to hack in the volume mounts there. Unfortunately this comes at the cost of deviating from the mainline branch. The other disadvantage I found moving from 1.x to 2.x is that each project creates its own namespace, whereas in 1.x every job ran in the same namespace (we used to put the PVC and the web-server pod in that namespace). We're using an EFS volume. A possible workaround for the latter would be to create a separate PVC in the project namespace bound to the same PV, although we haven't had much luck with this yet.

The ideal config would be something settable at the worker spec level, e.g.:

....
workerTemplate:
  kubernetes:
    createProjectNamespace: false
    volumeClaims:
      - name: someClaim
        claimName: 'example-builds'

Is there a better way of achieving this with what's provided right now?

krancour commented 2 years ago

In v2, Kubernetes is an implementation detail. Brigade is hosted on it and uses it to orchestrate work, but it's purposefully abstracted away from the end user. The major consideration behind that pivot was a desire to make Brigade useful for developers who don't know k8s, who lack direct access to the underlying cluster, or both.

The namespace-per-project pattern you've pointed out was both a frequently requested feature among v1 users and an unavoidable consequence of making Brigade usable by developers who lack direct access to the underlying cluster. If two projects could occupy the same Kubernetes namespace, a developer could abuse their access to one project to access resources belonging to the other. (This is also why Brigade assigns each project a namespace and doesn't let you choose one. Imagine anyone being able to run their workloads in any existing namespace they wanted.)

In short, it's perhaps best to forget about k8s when using v2. That was the goal -- to get k8s out of the way. Ask yourself how you'd approach this same problem if you were, for instance, using some CI/CDaaS, such as Circle or Travis.

We were using brigade 1.x as part of a pipeline that would publish customer builds and store those in a common shared volume, which would then be accessible by a web server to host them.

In a general sense, that's still a workable approach -- and I actually do this sort of thing all the time with v2. The only thing that changes is that you can no longer mount the volume directly to the job's pod, but that shouldn't stop you from writing your artifacts to that same volume through other means. You mentioned a web server that already mounts that volume. Could you, for instance, consider the possibility of POSTing the artifacts from your job to that web server? Or if HTTP doesn't suit you, could you run an SSH server alongside your web server and SCP the artifacts?
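
To sketch what the HTTP route might look like (the event source, image, build commands, and upload URL below are hypothetical placeholders for whatever your gateway, toolchain, and web server actually use):

  import { events, Job } from "@brigadecore/brigadier"

  events.on("brigade.sh/github", "push", async event => {
    const job = new Job("build-and-publish", "node:18", event)
    job.primaryContainer.command = ["sh"]
    job.primaryContainer.arguments = [
      "-c",
      // Build, then POST the artifact to the web server that used to
      // mount the shared volume, instead of writing to a PVC. Assumes
      // the image provides curl (the full node image does).
      "npm run build && tar czf site.tar.gz dist && " +
        "curl -fsS --data-binary @site.tar.gz https://builds.example.com/uploads/site.tar.gz"
    ]
    await job.run()
  })

  events.process()

The same shape works for the SCP route; you'd just swap the curl invocation for an scp to the SSH server running alongside the web server.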

krancour commented 2 years ago

Given that I haven't heard anything further from @harrylepotter-win, and given that the requested functionality undermines a deliberate design decision, I'm closing this issue for now. If @harrylepotter-win or anyone else wishes to re-open it, I'd be happy to discuss further approaches that work within the confines of Brigade 2's k8s abstraction.