Closed by WanzenBug 2 years ago
Just in case - OpenShift has a similar thing out-of-the-box: https://github.com/openshift/oauth-proxy
Hi @WanzenBug, I'm going to implement this, but we have our own rbac-proxy which we run with our own configuration. Actually, we'd be fine with just adding a `sidecars: []` option to the LinstorController and LinstorSatelliteSet resources, something like the sketch below.
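Purely as a sketch (the `sidecars` field is hypothetical and does not exist today; apiVersion from memory):

```yaml
apiVersion: piraeus.linbit.com/v1
kind: LinstorController
metadata:
  name: linstor-controller
spec:
  # Hypothetical pass-through list: containers appended verbatim
  # to the pod generated by the operator.
  sidecars:
    - name: rbac-proxy
      image: registry.example.com/our-rbac-proxy:latest  # placeholder image
      args:
        - --upstream=http://127.0.0.1:3370/  # LINSTOR REST API
      ports:
        - containerPort: 8443
          name: https
```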
But before I start implementing this, maybe you have a better idea?
Very interesting. Having an extra `sidecars: []` option could definitely work. There are a few issues if we want to properly secure the API: we would need to change the current listen address for controller and reactor from `0.0.0.0:...` to `127.0.0.1:...`, but only if we have the rbac-proxy sidecar.
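For illustration, the usual kube-rbac-proxy sidecar pattern would look roughly like this inside the controller Deployment (ports and image tag are assumptions on my part):

```yaml
# containers: section of the linstor-controller pod spec (sketch)
containers:
  - name: linstor-controller
    image: linstor-controller  # placeholder image
    # The API must now bind to loopback only (127.0.0.1:3370 instead
    # of 0.0.0.0:3370), so the proxy becomes the sole externally
    # reachable endpoint.
  - name: kube-rbac-proxy
    image: quay.io/brancz/kube-rbac-proxy:v0.14.2  # example tag
    args:
      - --secure-listen-address=0.0.0.0:8443
      - --upstream=http://127.0.0.1:3370/
    ports:
      - containerPort: 8443
        name: https
```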
I want to eventually move to an "Operator v2" that learns from all the mistakes in the current implementation. One of the current issues is that it is quite difficult to adapt the deployments/daemonsets generated by the operator, so there are a lot of fields in our CRDs that are just "pass-through" (resource, affinity, priorityclass, even service accounts, etc.).
Some ways to address this in a v2:
- Move back from the "all in Go" approach and let more things be managed by Helm again. There would potentially only be CRDs for some of the "special" features the operator currently supports, such as setting up storage pools. But that would also make it harder to implement some non-obvious features, such as restarting the csi-node pods when labels change, or backing up LINSTOR DB resources. It would also mean we still need to implement every new knob added to deployment/pod specs in some way in Helm, because people want to customize some weird stuff.
- Re-use something like kustomize: the idea being that there is a certain "default" resource integrated with the operator, the CRDs are only used for LINSTOR-specific stuff, and any customization of deployments happens via a free-form config map that provides the necessary changes to the default deployment via overlays/patches (see the sketch after this list).
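A hedged sketch of that second idea (the ConfigMap name and patch mechanism are made up for illustration): the user supplies a standard kustomize-style strategic-merge patch, which the operator applies on top of its built-in default Deployment.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: linstor-controller-overrides  # hypothetical name
data:
  # Strategic-merge patch applied by the operator on top of its
  # built-in default Deployment before creating it.
  patch.yaml: |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: linstor-controller
    spec:
      template:
        spec:
          priorityClassName: system-cluster-critical
          containers:
            - name: linstor-controller
              resources:
                limits:
                  memory: 2Gi
```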
In any case: probably out of scope for this issue.
> There are a few issues if we want to properly secure the API, as we need to change the current listen address for controller and reactor from `0.0.0.0:...` to `127.0.0.1:...`, but only if we have the sidecar rbac proxy.
Yeah, we just had a short meeting about that. I think if I can implement the rbac-proxy in the most common way, we can have this in the operator. Otherwise, there should be an option for customizing it.
> I want to eventually move to an "Operator v2" that learns from all the mistakes in the current implementation. One of the current issues is that it is quite difficult to adapt the deployments/daemonsets generated by the operator, so there are a lot of fields in our CRDs that are just "pass-through" (resource, affinity, priorityclass, even service accounts, etc.).
I like the Elasticsearch (ECK) way: they provide a `podTemplate` option which can be user-defined. The user-defined `podTemplate` serves as the base for the pod; the operator then appends its own values to it, but in most cases never overrides the user's settings.
User docs:
Source code:
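Transferred to our CRDs, it could look roughly like this (a sketch only; the `podTemplate` field does not exist in the current operator):

```yaml
apiVersion: piraeus.linbit.com/v1
kind: LinstorSatelliteSet
metadata:
  name: linstor-satellites
spec:
  # Hypothetical ECK-style field: used as the base pod template,
  # with operator-managed containers and volumes appended on top.
  podTemplate:
    metadata:
      labels:
        team: storage
    spec:
      priorityClassName: system-node-critical
      containers:
        - name: linstor-satellite
          resources:
            requests:
              cpu: 100m
```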
Thanks for the inspiration, that looks like exactly what we want. The only thing we have to keep in mind is upgrading our k8s Go module often enough to support all new settings.
Right now, configuring HTTPS on all components is quite cumbersome, or in some cases (monitoring with drbd-reactor) just not possible.
One solution I recently stumbled upon: https://github.com/brancz/kube-rbac-proxy. It's a small proxy that integrates with the normal RBAC mechanism of Kubernetes to secure endpoints. Using it as a sidecar for the linstor-controller would potentially remove the need for all these steps.
To use RBAC we would need to identify which components need a specific kind of access. For example: I believe the csi-node container only `GET`s resources, while the csi-controller needs to `POST` resource groups and definitions.
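Since kube-rbac-proxy by default authorizes requests as non-resource URLs (the request path plus the lower-cased HTTP method as the verb), that split could be expressed with plain RBAC roles, roughly like this (role names are made up, and I'm assuming all LINSTOR API paths live under `/v1/`):

```yaml
# Read-only access, e.g. bound to the csi-node ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: linstor-api-readonly
rules:
  - nonResourceURLs: ["/v1/*"]
    verbs: ["get"]
---
# Full access, e.g. bound to the csi-controller ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: linstor-api-provisioner
rules:
  - nonResourceURLs: ["/v1/*"]
    verbs: ["get", "post", "put", "delete"]
```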