cockpit-project / cockpituous

Cockpit Continuous Integration and Delivery

Move to yaml resources everywhere #612

Open martinpitt opened 5 months ago

martinpitt commented 5 months ago

podman kube play can create podman secrets from k8s YAML secrets now. With that, both our OpenShift and systemd deployments can use the same input.
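For illustration, the same secrets YAML could then feed both worlds; a rough sketch (the file name is made up):

```console
# hypothetical file name; one input for both deployments
$ podman kube play secrets.yml    # systemd/podman hosts
$ oc create -f secrets.yml        # OpenShift
```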

While we're at it, split the input secrets further:

martinpitt commented 5 months ago

#617 does the first part of flattening the s3-keys.

martinpitt commented 5 months ago

Unfortunately, podman secrets don't understand k8s-style secrets at all. If I have a /tmp/s.yaml with

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: foo-tokens
stringData:
  github-token: "foo bar 123"
  supi: |
    first line
    second line
```

Then podman play kube /tmp/s.yaml works, but the secret doesn't get mounted as a directory (with the keys as files) as it is in k8s; it ends up as a single flat YAML file:

```console
$ podman run -it --rm --secret=foo-tokens,target=/run/secrets/foo quay.io/cockpit/tasks cat /run/secrets/foo
apiVersion: v1
kind: Secret
metadata:
  creationTimestamp: null
  name: foo-tokens
stringData:
  github-token: foo bar 123
  supi: |
    first line
    second line
```

In order for this to work, you have to pick out every single key individually with env.valueFrom.secretKeyRef.{name,key}, which is awkward.
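For reference, that per-key plumbing would look roughly like this, reusing the foo-tokens secret from above (the environment variable name is made up):

```yaml
spec:
  containers:
    - name: test1
      image: quay.io/cockpit/tasks
      env:
        - name: GITHUB_TOKEN          # hypothetical variable name
          valueFrom:
            secretKeyRef:
              name: foo-tokens
              key: github-token
```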

See https://docs.podman.io/en/latest/markdown/podman-kube-play.1.html

ConfigMaps have the same problem, BTW. So there goes the dream of uniform handling...

martinpitt commented 5 months ago

It actually does work fine when using podman play kube to create the pod as well:

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: foo-tokens
stringData:
  github-token: "foo bar 123"
  supi: |
    first line
    second line
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: test1
      image: quay.io/libpod/alpine_nginx:latest
      volumeMounts:
        - name: foo
          mountPath: /etc/foo
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: foo-tokens
        optional: false
```

```console
❱❱❱ podman exec -it mypod-test1 ls -l /etc/foo
total 8
-rw-r--r--    1 root     root            11 Apr  8 15:27 github-token
-rw-r--r--    1 root     root            23 Apr  8 15:27 supi
```

So we could go all-in on YAML and use that everywhere. Quadlets even support .kube files.

https://www.redhat.com/sysadmin/multi-container-application-podman-quadlet
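For what it's worth, a minimal .kube quadlet unit would look roughly like this (the paths and names are made up):

```ini
# /etc/containers/systemd/tasks.kube (hypothetical path)
[Unit]
Description=Cockpit tasks pod from k8s YAML

[Kube]
# the pod/secret YAML, as in the example above
Yaml=/etc/containers/systemd/tasks.yaml

[Install]
WantedBy=default.target
```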

martinpitt commented 5 months ago

In other words, we should kube-ify the containers first (as that works with host-directory secrets) and then convert the secrets -- that way the work can be broken down into multiple smaller steps.
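As a sketch of that intermediate step, the pod YAML could keep pulling the secrets from a host directory via hostPath (paths here are hypothetical), with the volume swapped for a proper secret later:

```yaml
spec:
  containers:
    - name: tasks
      image: quay.io/cockpit/tasks
      volumeMounts:
        - name: secrets
          mountPath: /run/secrets/tasks
          readOnly: true
  volumes:
    - name: secrets
      hostPath:
        # hypothetical host directory holding the current secrets
        path: /var/lib/cockpit-secrets/tasks
        type: Directory
```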

allisonkarlitskaya commented 4 months ago

So one possibility for handling secrets, which falls short of putting each individual secret into Bitwarden, would be to use Ansible Vault: we'd check the encrypted secret vault into this repository and put the encryption passphrase into Bitwarden (and manually enter it on each deployment). That should be very easy to get going.
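A rough sketch of that workflow, with made-up file names:

```console
# encrypt the vault once; the passphrase itself lives in Bitwarden
$ ansible-vault encrypt secrets.yml
# edit the encrypted file in place later
$ ansible-vault edit secrets.yml
# deploy, typing the passphrase by hand
$ ansible-playbook deploy.yml --ask-vault-pass
```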

The only downside is that we'd introduce a giant encrypted blob with all of our secrets into source control. That sucks for two reasons:

allisonkarlitskaya commented 4 months ago

I played with this a bit more and found out a couple of things:

On the Bitwarden side, this is very much possible with bw, but the performance situation there seems pretty awful. Each command-line invocation takes on the order of ~1 s, which adds up quickly given the way ansible wants to interact with it. There's a Rust version which is a lot faster, but it's sort of sad that we can't use the official one. We might work around that by putting all secrets into a tarball which we upload to Bitwarden as an attachment, but that approach doesn't seem a whole lot better than having an Ansible Vault archive in git...
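For context, the kind of bw round-trips involved would look something like this (item names and IDs are made up); each invocation pays that ~1 s startup cost:

```console
# unlock once and reuse the session token
$ export BW_SESSION=$(bw unlock --raw)
# every individual lookup is a separate ~1 s process
$ bw get password ci-github-token
# the tarball-as-attachment variant would be a single fetch
$ bw get attachment secrets.tar --itemid <item-id> --output ./secrets.tar
```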

martinpitt commented 4 months ago

> you don't actually need to create the container. Telling podman to create a pod with a volume in it is sufficient.

If you mean the yaml resource, then yes.

> The volume ends up in the global namespace, with the name you give it,

Right, it turns into a standard podman volume.

> which can then be hit by the container being started via the usual commandline interface.

Do you generally not like to start the containers via a .kube file, or do you mean to do this just as an intermediate step to yaml-ify the secrets first, and the containers later?

> Unfortunately, you get the extra pod as a side-effect.

:man_shrugging: that overhead is tiny.

> diffs probably aren't going to be nice to look at

Yes, I don't like this either -- this can't be the primary source of truth, just a transport format. So we still need to keep the secrets someplace else. That's also why I am not a :100: fan of bitwarden -- there's no history, no commit logs with explanations, etc. (I'm not against it, this is just something which I really like about having them in git).

> somehow, even with encryption I guess it seems "weird" to have that data in a public repo

Yeah, I share the feeling. It's entirely emotional, though -- if someone can break that encryption, they can also break pretty much everything else that holds the internet together.

Thanks for your bw investigations! The ~1 s delay per step doesn't sound so bad really -- we only refresh secrets once in a blue moon, and I usually run ansible with -f20 so that it parallelizes heavily. Or does that only allow one serial access at a time?

allisonkarlitskaya commented 4 months ago

> Do you generally not like to start the containers via a .kube file, or do you mean to do this just as an intermediate step to yaml-ify the secrets first, and the containers later?

For the "monitor" containers, this is fine by me, but I'm trying to imagine how this will interact with job-runner...

Is your idea to kube-ify the monitors (providing them with the secrets) and keep the imperative podman approach, which manually accesses the secrets via --volume, using the names it assumes are present because the monitor container is running?

If so, then I agree that this would be reasonable.

If you want job-runner to somehow use podman kube play to start the job containers, we're going to need some more thinking...

martinpitt commented 4 months ago

Yes, I only wanted the "static" deployments in yaml, at least for now; the job-runner instances are fine with podman run --volume. We only need to rethink this if/when we ever get an OpenShift with KVM support -- then job-runner will want to kubectl create an actual Job object. But that's not in sight anytime soon.
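i.e. job-runner keeps doing something along these lines (the host path is illustrative):

```console
$ podman run --rm \
    --volume /var/lib/cockpit-secrets/tasks:/run/secrets/tasks:ro \
    quay.io/cockpit/tasks ...
```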