piraeusdatastore / piraeus-operator

The Piraeus Operator manages LINSTOR clusters in Kubernetes.
https://piraeus.io/
Apache License 2.0

RFC: secure components via kube-rbac-proxy #180

Closed WanzenBug closed 2 years ago

WanzenBug commented 3 years ago

Right now configuring HTTPS on all components is quite cumbersome, or in some cases (monitoring with drbd-reactor) just not possible.

One solution I recently stumbled upon: https://github.com/brancz/kube-rbac-proxy. It's a small proxy that integrates with the normal RBAC mechanism of Kubernetes to secure endpoints. Using it as a sidecar for the linstor-controller would potentially remove the need for all these steps.
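A minimal sketch of what such a sidecar could look like, assuming the LINSTOR REST API's default plain-HTTP port (3370) and a generic deployment layout; the names and image tags below are illustrative, not taken from the operator's actual manifests:

```yaml
# Hypothetical sidecar layout: kube-rbac-proxy terminates TLS and
# authorizes requests before forwarding them to the controller API,
# which only listens on loopback.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linstor-controller
spec:
  template:
    spec:
      serviceAccountName: linstor-controller
      containers:
        - name: linstor-controller
          image: piraeusdatastore/piraeus-server   # example image
          args: ["startController"]
          # REST API assumed bound to 127.0.0.1:3370, so it is only
          # reachable through the proxy below
        - name: kube-rbac-proxy
          image: quay.io/brancz/kube-rbac-proxy:v0.14.0  # example tag
          args:
            - --secure-listen-address=0.0.0.0:8443
            - --upstream=http://127.0.0.1:3370/
          ports:
            - containerPort: 8443
              name: https
```

Clients would then talk to port 8443 with their service-account token, and the proxy performs authentication and RBAC checks before anything reaches LINSTOR.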

To use RBAC we would need to identify which kind of access each component needs. For example: I believe the csi-node container only GETs resources, while the csi-controller needs to POST resource groups and definitions.
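As a rough illustration of that split: kube-rbac-proxy can map every incoming request to a SubjectAccessReview against a fixed Kubernetes resource, and a ClusterRole then grants each component only the verbs it needs. The resource mapping and all names below are assumptions for illustration, not an existing configuration:

```yaml
# kube-rbac-proxy --config-file: map API requests onto a proxy
# subresource of a (hypothetical) linstor-controller Service.
authorization:
  resourceAttributes:
    namespace: piraeus
    apiVersion: v1
    resource: services
    subresource: proxy
    name: linstor-controller
---
# Read-only role, e.g. for a component like csi-node that only GETs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: linstor-api-read
rules:
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["linstor-controller"]
    verbs: ["get"]
```

A csi-controller-style component would get a second role that additionally allows "create"/"update" on the same resource.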

AntonSmolkov commented 3 years ago

Just in case: OpenShift has a similar thing out of the box: https://github.com/openshift/oauth-proxy

kvaps commented 2 years ago

Hi @WanzenBug, I'm going to implement this, but we have our own rbac-proxy which we run with our own configuration, e.g.:

https://github.com/deckhouse/deckhouse/blob/b8024c00bc0c616cdceee8488bfef523d0f5086a/modules/021-kube-proxy/templates/daemonset.yaml#L106-L138

Actually, we'd be fine with just adding a sidecars: [] option to the LinstorController and LinstorSatelliteSet resources. But before I start implementing this: maybe you have a better idea?
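For reference, the field under discussion might look like this on the CRD; the sidecars key is hypothetical and does not exist in the current LinstorController schema:

```yaml
apiVersion: piraeus.linbit.com/v1
kind: LinstorController
metadata:
  name: linstor
spec:
  # Hypothetical pass-through list: each entry would be appended
  # verbatim to the generated pod's containers.
  sidecars:
    - name: kube-rbac-proxy
      image: quay.io/brancz/kube-rbac-proxy:v0.14.0  # example tag
      args:
        - --secure-listen-address=0.0.0.0:8443
        - --upstream=http://127.0.0.1:3370/
```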

WanzenBug commented 2 years ago

Very interesting. Having an extra sidecars: [] option could definitely work. There are a few issues if we want to properly secure the API: we would need to change the current listen address for controller and reactor from 0.0.0.0:... to 127.0.0.1:..., but only if the sidecar rbac-proxy is present.
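One way to express the conditional bind change is through the controller's configuration file, sketched here as a ConfigMap that the operator would only render this way when the proxy sidecar is enabled. The [http] key names are an assumption based on LINSTOR's configuration format, not verified against a specific release:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: linstor-controller-config
data:
  linstor.toml: |
    # With the rbac-proxy sidecar enabled, bind the REST API to
    # loopback only; without it, this would stay on all interfaces.
    [http]
      listen_addr = "127.0.0.1"
      port = 3370
```

A similar conditional change would be needed for drbd-reactor's monitoring endpoint.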

I want to eventually move to an "Operator v2" that learns from all the mistakes in the current implementation. One of the current issues is that it is quite difficult to adapt the deployments/daemonsets generated by the operator, so there are a lot of fields in our CRDs that are just "pass-through" (resources, affinity, priorityClass, even service accounts, etc.).

Some ways to address this in a v2:

In any case: probably out of scope for this issue.

kvaps commented 2 years ago

> There are a few issues if we want to properly secure the API: we would need to change the current listen address for controller and reactor from 0.0.0.0:... to 127.0.0.1:..., but only if the sidecar rbac-proxy is present.

Yeah, we just had a short meeting about exactly that. I think if I can implement the rbac-proxy in the most common way, we can have it in the operator itself. Otherwise, there should be an option for customizing it.

> I want to eventually move to an "Operator v2" that learns from all the mistakes in the current implementation. One of the current issues is that it is quite difficult to adapt the deployments/daemonsets generated by the operator, so there are a lot of fields in our CRDs that are just "pass-through" (resources, affinity, priorityClass, even service accounts, etc.).

I like the Elasticsearch operator's approach: it provides a podTemplate option which can be user-defined. This option sets the base podTemplate for the pod; the operator then appends its own values to that podTemplate, but in most cases never overrides what the user set.
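Transplanted to this operator, that pattern might look as follows; the podTemplate field is borrowed from the Elastic Cloud on Kubernetes (ECK) CRDs and is purely hypothetical here:

```yaml
apiVersion: piraeus.linbit.com/v1
kind: LinstorController
metadata:
  name: linstor
spec:
  # Hypothetical v2 field: a user-supplied base pod template.
  # The operator would merge its own containers, volumes, and
  # defaults on top without overriding the user's settings.
  podTemplate:
    spec:
      priorityClassName: system-cluster-critical
      containers:
        - name: kube-rbac-proxy
          image: quay.io/brancz/kube-rbac-proxy:v0.14.0  # example tag
```

This would replace most of the individual pass-through fields (affinity, resources, priorityClass, ...) with a single, general-purpose merge point.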

User docs:

Source code:

WanzenBug commented 2 years ago

Thanks for the inspiration, that looks like exactly what we want. The only thing we have to keep in mind is upgrading our k8s Go module often enough to support all new settings.