Open vkukk opened 1 month ago
Hi, @vkukk!
This is because of a design decision we took a long time ago, as a safe measure, to avoid users losing data. Here is another similar issue with more details: https://github.com/pulp/pulp-operator/issues/1096
From #1096 and this issue, it seems like this "field config rollback" is causing more confusion than it is preventing data loss. Do you think we should allow modifying the storage fields? Right now, whenever a user needs to change the storage, we recommend doing a new installation.
@StopMotionCuber any inputs?
note: since k8s 1.29, transition rules seem to be GA, so maybe we can revisit this and use them instead of validation webhooks or the current "rollback" implementation.
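For reference, a CEL transition rule could reject an edit to such a field outright (with an explicit error from the API server) instead of silently reverting it. This is only a sketch against a hypothetical CRD schema excerpt; the field name is illustrative and not the actual pulp-operator CRD:

```yaml
# Hypothetical CRD schema fragment: make file_storage_class immutable via a
# CEL transition rule (oldSelf is only available on update, making this a
# transition rule). Field and message are illustrative, not pulp-operator's.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        file_storage_class:
          type: string
          x-kubernetes-validations:
            - rule: "self == oldSelf"
              message: "file_storage_class is immutable; create a new Pulp resource to change it"
```

With a rule like this, `kubectl apply` would fail loudly on the changed field rather than appearing to succeed while the operator rolls the value back.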
Ah, the Useless Machine behavior again.
As I'm explicitly being asked for input here: frankly, I do not really see why those fields should not be changeable.
> This is because of a design decision we took a long time ago, as a safe measure, to avoid users losing data
I do not get how that is supposed to be the case. I have been thinking of user stories where this would prevent data loss, but none comes to mind that is not an obvious misconfiguration by a k8s admin. A (sane) Kubernetes admin is aware that switching to a new S3 bucket that is missing the previous data would lead to errors. But that misconfiguration is already achievable by updating the secret itself, and if I wanted another way to shoot myself in the foot, I could find 20+ ways to do so.
On the other hand, I do see use cases for migrating to a new bucket (or Redis cluster), or for internal cluster changes to secret management (e.g. migrating from placing secrets directly on the cluster to using Vault). All of these might benefit from updating the secret, depending on your migration workflow.
In the end, though, I'm not the one who has to maintain the operator, so it's not my decision to make. Recreating the Pulp resource is still possible and provides a workaround, since a short downtime is acceptable for my use cases.
When I create a Pulp CR, I expect pulp-operator to read and use what I've created, not to ignore the provided configuration.
Guarding against mistakes is an entirely different problem and should not be handled by ignoring a user's or admin's configuration changes.
We probably won't be able to get around to changing the behavior anytime soon. We would appreciate any contributions.
> We probably won't be able to get around to changing the behavior anytime soon. We would appreciate any contributions.
Could you quickly confirm that you are open to a change in behavior here (that is, not reverting the change back)? I think code-wise it's rather trivial to implement, I took a look at it the last time I stumbled upon it. I think the bigger issue here is agreeing on a way forward, and obviously as an outside contributor these kind of decision processes are hard to gain insights to.
Should the answer here be "yes, we're open to the change", I could prepare a PR.
Yes, we would be open to the change. :smile:
**Version**
Please provide the versions of the pulp-operator and pulp images in use.

```
$ helm -n pulp list
NAME  NAMESPACE  REVISION  UPDATED                                   STATUS    CHART                APP VERSION
pulp  pulp       1         2024-09-03 16:54:04.919341315 +0300 EEST  deployed  pulp-operator-0.1.0  1.0.1-beta.4
```

Default images.
**Describe the bug**
When changing the `cache.external_cache_secret` field value and then applying it using `kubectl apply -f pulp.yaml`, the change is not reflected in the running Pulp CR.
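The change was applied with a CR along these lines (a hypothetical excerpt; the `apiVersion` and surrounding fields may differ from the actual manifest, while the secret names are taken from this report):

```yaml
# Hypothetical excerpt of pulp.yaml for illustration only.
apiVersion: repo-manager.pulpproject.org/v1beta2
kind: Pulp
metadata:
  name: pulp
  namespace: pulp
spec:
  cache:
    enabled: true
    # New value applied here; the operator keeps the old one.
    external_cache_secret: pulp-redis-secret-8h6c85m7d4
```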
Now check the actual running Pulp CR properties:
`external_cache_secret` is `pulp-redis-secret-88tk2thgc5` but should be `pulp-redis-secret-8h6c85m7d4`.

**To Reproduce**
Change the `cache.external_cache_secret` value and apply the change. The change is not reflected in the Pulp CR in the Kubernetes cluster.
**Expected behavior**
The secret name is updated.
The Deployment is not ready due to a broken secret reference that cannot be updated to the new secret.