1Password / connect-helm-charts

Official 1Password Helm Charts
https://developer.1password.com
MIT License

Setting a namespace to watch prevents the cluster role bind from being created. #112

Closed Wopple closed 1 year ago

Wopple commented 2 years ago

What happened?

No cluster role binding was created, causing the operator pod to get stuck in CrashLoopBackOff.

What did you expect to happen?

The operator pod should work as intended.

Steps to reproduce

helm install connect 1password/connect \
    -n 1pass \
    --create-namespace \
    --set operator.create=true \
    --set operator.watchNamespace={default}

Notes & Logs

This seems to be the offending line:

https://github.com/1Password/connect-helm-charts/blob/main/charts/connect/templates/clusterrolebinding.yaml#L2

It doesn't create the binding when you specify a namespace to watch. If this is intended, then close this issue.
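For reference, the guard on that line is a Helm conditional. A paraphrased sketch of the pattern (not a verbatim copy of the chart; names follow the chart's conventions) that skips the binding whenever a watch list is set:

```yaml
# charts/connect/templates/clusterrolebinding.yaml (paraphrased sketch)
# The binding is rendered only when the operator is enabled AND no
# watchNamespace list is given, i.e. when it watches all namespaces.
{{- if and .Values.operator.create (not .Values.operator.watchNamespace) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: onepassword-connect-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: onepassword-connect-operator
subjects:
  - kind: ServiceAccount
    name: onepassword-connect-operator
    namespace: {{ .Release.Namespace }}
{{- end }}
```

So with `--set operator.watchNamespace={default}`, the whole block above renders to nothing, which matches the observed behavior.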

irons commented 2 years ago

Previously: https://github.com/1Password/connect-helm-charts/issues/66#issuecomment-937612504

I would also like to understand the rationale here.

irons commented 2 years ago

OK, it does make sense now. The ClusterRoleBinding, being cluster-wide, is only appropriate when the operator is asked to watch all namespaces. If you specify a list of namespaces to watch, a RoleBinding (name `onepassword-connect-operator`, with a `roleRef` of kind ClusterRole; this is correct) is created in just those namespaces.

I was also seeing a failure yesterday which led me to think that the namespaced roles were not being created correctly, but as I dug into it more closely, that problem ceased to be reproducible, so I still don't know what was going on. @Wopple, your problem does not seem to be universal, so you might want to provide more information.

Troubleshooting this got easier when I re-learned the syntax for querying effective permissions for service accounts:

$ kubectl auth can-i create secrets -n AUTHORIZED_NAMESPACE --as system:serviceaccount:MY1PCNAMESPACE:onepassword-connect-operator
yes

$ kubectl auth can-i create secrets -n NOT_AUTHORIZED_NAMESPACE --as system:serviceaccount:MY1PCNAMESPACE:onepassword-connect-operator
no
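
A complementary check (standard kubectl, not specific to this chart) is to list the bindings that actually exist, so you can see whether the chart created a cluster-wide binding or per-namespace ones:

```shell
# Cluster-wide binding: should exist only when watching all namespaces
kubectl get clusterrolebinding | grep onepassword

# Namespaced bindings: one per entry in operator.watchNamespace
kubectl get rolebinding --all-namespaces | grep onepassword
```
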

Wopple commented 2 years ago

I'm not convinced. You ought to be able to change which namespaces are being watched by editing the deployment after installing the helm chart. A cluster role binding is what makes that work; otherwise you're manually managing role bindings every time the list changes. But I will try to provide more details, because this has been 100% reproducible for me.

irons commented 2 years ago

The reason I want to constrain the operator to a set of namespaces is to have more than one operator in a cluster, with access tokens for non-overlapping vaults, operating in distinct namespaces. This is incompatible with giving any one operator serviceaccount a clusterrolebinding, when my goal is explicitly to limit each operator's reach.
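To make that concrete, a sketch of the multi-operator layout (release names and namespaces here are hypothetical, and each release would be configured with its own Connect credentials, omitted for brevity):

```shell
# Operator for team A: its RoleBindings only cover team A's namespace.
helm install connect-team-a 1password/connect \
    -n team-a --create-namespace \
    --set operator.create=true \
    --set 'operator.watchNamespace={team-a}'

# Separate operator for the SRE team, with a different token and reach.
helm install connect-sre 1password/connect \
    -n sre --create-namespace \
    --set operator.create=true \
    --set 'operator.watchNamespace={sre,infra}'
```

Granting either service account a ClusterRoleBinding would defeat the point of the split.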

If you want to modify the namespaces being watched, do that by editing and upgrading the helm release, not by modifying a deployment. Let the chart manage the role bindings for the current set of namespaces. If you're choosing to manually edit resources owned by helm, you're squarely in the land of undefined behavior.
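For example, growing the watch list from the original repro would look something like this (the added `staging` namespace is hypothetical):

```shell
helm upgrade connect 1password/connect \
    -n 1pass \
    --set operator.create=true \
    --set 'operator.watchNamespace={default,staging}'
```

The chart then reconciles the RoleBindings to match the new list, with no manual RBAC edits.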

Wopple commented 2 years ago

Maybe this is a difference of philosophy. I want tools, not frameworks. If helm "owned" the artifacts in my kube cluster, it would be acting like a framework. In my view, I should be able to modify what is in kube with kube, regardless of the tool that helped create it. Maybe I'm the one who used the helm chart, but someone else wants to service the deployment. Now there is additional communication necessary around the helm chart. That's friction I want to avoid if it is unnecessary.

I am curious what use case you have. What is wrong with having a single deployment and giving its token permissions that are the union of all the granular tokens? I can see why this may be behavior you would want, but I would at least like the option to use a cluster role binding and specify a list of namespaces to watch. I mean, it is an option now, but it requires an unintuitive initialization.

irons commented 2 years ago

The use case is that the cluster's users and their access rights are not monolithic. Developers from different groups should have access to their own secrets, but not each other's, and not the secrets of the SRE team operating a bunch of foundational services. As in your scenario, some namespaces like default shouldn't have any access to 1Password secrets.

Your preference for not having to interact with the helm chart after first use is fine for charts you write and maintain yourself, but expecting other people to adhere to those restrictions will lead to disappointment. While I don't work or speak for agilebits, the implied contract between a helm chart author and its users always runs through the chart. The helm release does in fact own the resources it deploys into the cluster, no scare quotes required, and you alter them at your own risk.

Wopple commented 2 years ago

The thing about tools is they don't prevent you from using them the way you want. If you want to manage the resources with the chart, you can. I'm advocating for both of our use cases to be supported.

It's not an expectation, it's advocacy. Scare quotes? They are air quotes, because the source of truth is the state of the cluster. Helm is just a templating macro over what is otherwise kube native. Helm doesn't have its own state, so it doesn't own anything.

ghost commented 2 years ago

Hi @Wopple - thanks for taking the time to file this issue, and thank you to both you and @irons for this discussion of expectations/opinions on what the behaviour of clusterrolebinding should be when using --set operator.watchNamespace.

After discussing this one internally with the team we're not sure which behaviour is better/preferred, but we'd love to continue this discussion in order to gain a better understanding of both of your perspectives.

We do have one follow-up question: could this issue potentially be resolved by adding an option to force the creation of the clusterrolebinding and (if this is possible in Helm) warn if you're using watchNamespace while the clusterrolebinding does not exist?

Wopple commented 2 years ago

Something to that effect would be good IMO. I would like to see both use cases supported if possible. I don't know if a warning is necessary, for example:

| | watchNamespace `[]` (default) | watchNamespace `[some, values]` |
| --- | --- | --- |
| clusterRole `false` (default) | ClusterRoleBind | RoleBind |
| clusterRole `true` | ClusterRoleBind | ClusterRoleBind |

The false / [] case is ClusterRoleBind because it is watching all namespaces. It's a bit of an unfortunate interface that clusterRole gets ignored in this case, but the only way to resolve that would be to change the meaning of [] to mean 'none' instead of 'all.'
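In Helm template terms, the proposed `clusterRole` flag (hypothetical; not an existing chart value) might gate the binding like this:

```yaml
# Hypothetical sketch: render the ClusterRoleBinding when either the
# proposed operator.clusterRole flag is true, or no watchNamespace
# list is set (the current all-namespaces behavior).
{{- if or .Values.operator.clusterRole (not .Values.operator.watchNamespace) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
# ...same body as today's template...
{{- end }}
```

This preserves current defaults while making the binding opt-in for the namespaced case.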


More broadly speaking, I always prefer tools over frameworks. Meaning, I don't want something that takes control or expects you to do things according to its constraints. For this project (or any helm project), I want it to be straightforward to modify the Kubernetes resources directly with kubectl, or via the helm chart for a more batteries-included experience.

jillianwilson commented 1 year ago

Thanks for the feedback and interesting discussion. As this no longer seems to be a bug, we are going to close this issue. We always want to improve our platform and how our tools can be used, so we'll take this away and put it on our radar for future improvements.