pulp / pulp-operator

Kubernetes Operator for Pulp 3. Under active development.
https://docs.pulpproject.org/pulp_operator/
GNU General Public License v2.0

cache.external_cache_secret does not get updated in Pulp CR #1343

Open vkukk opened 2 months ago

vkukk commented 2 months ago

Version
Please provide the versions of the pulp-operator and pulp images in use.

$ helm -n pulp list
NAME  NAMESPACE  REVISION  UPDATED                                    STATUS    CHART                APP VERSION
pulp  pulp       1         2024-09-03 16:54:04.919341315 +0300 EEST   deployed  pulp-operator-0.1.0  1.0.1-beta.4

Pulp images: default

Describe the bug
When changing the value of cache.external_cache_secret and then applying the manifest with 'kubectl apply -f pulp.yaml':

apiVersion: repo-manager.pulpproject.org/v1beta2
kind: Pulp
metadata:
  name: pulp
  namespace: pulp
spec:
  object_storage_s3_secret: s3-secret

  database:
    # when changing pulp-postgresql env values, update secret suffix here
    external_db_secret: pulp-postgresql-secret-kfd8672bgh

  cache:
    enabled: true
    external_cache_secret: pulp-redis-secret-8h6c85m7d4

  api:
    replicas: 2
    resource_requirements:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 1
        memory: 512Mi

  content:
    replicas: 2
    resource_requirements:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi

  worker:
    replicas: 4
    resource_requirements:
      requests:
        cpu: 500m
        memory: 500Mi
      limits:
        cpu: 2
        memory: 1Gi
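
For illustration, the apply step plus a targeted read of just this one field would look something like the following (the jsonpath check is added here for clarity and is not part of the original report):

$ kubectl apply -f pulp.yaml
$ kubectl -n pulp get pulp pulp -o jsonpath='{.spec.cache.external_cache_secret}'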

Now check the actual properties of the running Pulp CR:

$ kubectl -n pulp get pulp -oyaml
apiVersion: v1
items:
- apiVersion: repo-manager.pulpproject.org/v1beta2
  kind: Pulp
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"repo-manager.pulpproject.org/v1beta2","kind":"Pulp","metadata":{"annotations":{},"name":"pulp","namespace":"pulp"},"spec":{"api":{"replicas":2,"resource_requirements":{"limits":{"cpu":1,"memory":"512Mi"},"requests":{"cpu":"250m","memory":"256Mi"}}},"cache":{"enabled":true,"external_cache_secret":"pulp-redis-secret-8h6c85m7d4"},"content":{"replicas":2,"resource_requirements":{"limits":{"cpu":"500m","memory":"512Mi"},"requests":{"cpu":"250m","memory":"256Mi"}}},"database":{"external_db_secret":"pulp-postgresql-secret-kfd8672bgh"},"object_storage_s3_secret":"s3-secret","worker":{"replicas":4,"resource_requirements":{"limits":{"cpu":2,"memory":"1Gi"},"requests":{"cpu":"500m","memory":"500Mi"}}}}}
    creationTimestamp: "2024-09-05T08:15:23Z"
    generation: 35
    name: pulp
    namespace: pulp
    resourceVersion: "44167475287"
    uid: c4b2e685-c6d1-4ca6-ab07-ae34a165f567
  spec:
    admin_password_secret: pulp-admin-password
    api:
      gunicorn_timeout: 90
      gunicorn_workers: 2
      replicas: 2
      resource_requirements:
        limits:
          cpu: 1
          memory: 512Mi
        requests:
          cpu: 250m
          memory: 256Mi
    cache:
      enabled: true
      external_cache_secret: pulp-redis-secret-88tk2thgc5
    container_auth_private_key_name: container_auth_private_key.pem
    container_auth_public_key_name: container_auth_public_key.pem
    container_token_secret: pulp-container-auth
    content:
      gunicorn_timeout: 90
      gunicorn_workers: 2
      replicas: 2
      resource_requirements:
        limits:
          cpu: 500m
          memory: 512Mi
        requests:
          cpu: 250m
          memory: 256Mi
    database:
      external_db_secret: pulp-postgresql-secret-kfd8672bgh
    db_fields_encryption_secret: pulp-db-fields-encryption
    deployment_type: pulp
    image: quay.io/pulp/pulp-minimal
    image_pull_policy: IfNotPresent
    image_version: stable
    image_web: quay.io/pulp/pulp-web
    image_web_version: stable
    mount_trusted_ca: false
    object_storage_s3_secret: s3-secret
    pulp_secret_key: pulp-secret-key
    worker:
      replicas: 4
      resource_requirements:
        limits:
          cpu: 2
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 500Mi
  status:
    admin_password_secret: pulp-admin-password
    conditions:
    - lastTransitionTime: "2024-09-09T10:01:09Z"
      message: pulp operator tasks running
      reason: OperatorRunning
      status: "False"
      type: Pulp-Operator-Finished-Execution
    - lastTransitionTime: "2024-09-09T10:01:09Z"
      message: Reconciling pulp-server Secret
      reason: UpdatingSecret
      status: "False"
      type: Pulp-API-Ready
    - lastTransitionTime: "2024-09-09T10:01:10Z"
      message: Reconciling pulp-content Deployment
      reason: UpdatingDeployment
      status: "False"
      type: Pulp-Content-Ready
    - lastTransitionTime: "2024-09-09T10:04:39Z"
      message: Worker deployment not ready yet
      reason: UpdatingWorkerDeployment
      status: "False"
      type: Pulp-Worker-Ready
    - lastTransitionTime: "2024-09-05T08:15:25Z"
      message: All Web tasks ran successfully
      reason: WebTasksFinished
      status: "True"
      type: Pulp-Web-Ready
    container_token_secret: pulp-container-auth
    db_fields_encryption_secret: pulp-db-fields-encryption
    deployment_type: pulp
    external_cache_secret: pulp-redis-secret-88tk2thgc5
    image: quay.io/pulp/pulp-minimal:stable
    last_deployment_update: "2024-09-09T10:01:09Z"
    object_storage_s3_secret: s3-secret
    pulp_secret_key: pulp-secret-key
kind: List
metadata:
  resourceVersion: ""

external_cache_secret is pulp-redis-secret-88tk2thgc5 but should be pulp-redis-secret-8h6c85m7d4.

To Reproduce
Change the cache.external_cache_secret value and apply the changes. The change is not reflected in the Pulp CR in the Kubernetes cluster.

Expected behavior
The secret name is updated.

The deployment is not ready because it references a broken secret that cannot be updated to the new one.

git-hyagi commented 2 months ago

Hi, @vkukk!

This is because of a design decision we took a long time ago, as a safe measure, to avoid users losing data. Here is another similar issue with more details: https://github.com/pulp/pulp-operator/issues/1096

From #1096 and this issue, it seems like this "field config rollback" is causing more confusion than it is helping. Do you think we should allow modifying the storage fields? Right now, whenever a user needs to change the storage, we recommend doing a new installation.

@StopMotionCuber any inputs?

note: since k8s 1.29 the transition rules seem to be GA, so maybe we can revisit this instead of using the validation webhooks or the current "rollback" implementation.
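
To make that concrete, here is a rough sketch of what a CEL transition rule on the Pulp CRD schema could look like (the field path mirrors the CR above; the rule and message are illustrative and not part of the current CRD):

# Illustrative excerpt of a CRD openAPIV3Schema: a CEL transition rule (GA in
# k8s 1.29) rejects changes at admission time instead of silently rolling them
# back. Whether this field should be immutable at all is the open question here.
properties:
  spec:
    properties:
      cache:
        properties:
          external_cache_secret:
            type: string
            x-kubernetes-validations:
              - rule: "self == oldSelf"
                message: "external_cache_secret is immutable; recreate the Pulp CR to change it"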

StopMotionCuber commented 2 months ago

Ah, the Useless Machine behavior again.

Since I'm being actively asked for input here: frankly, I do not really get why those fields should not be changeable.

This is because of a design decision we took a long time ago, as a safe measure, to avoid users losing data

I do not see how that is supposed to be the case. I have been thinking of user stories where this would prevent data loss, but none come to mind that are not an obvious misconfiguration by a k8s admin. A (sane) Kubernetes admin is aware that changing your S3 bucket to a new one that is missing the previous data would lead to errors. But that misconfiguration is already achievable by updating the secret itself, and if I wanted another way to shoot myself in the foot, I could find 20+ ways to do so.

On the other hand, I do see use cases: migrating to a new bucket (or Redis cluster), or making internal cluster changes to secret management (e.g. moving from placing secrets directly on the cluster to using Vault). All of these might benefit from being able to update the secret, depending on your migration workflow.

In the end, though, I'm not the one who has to maintain the operator, so it's not my decision. Recreating the Pulp resource is still possible and provides a workaround, since a short downtime is acceptable for my use cases.
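
For reference, a rough sketch of that recreate workaround (assuming the CR is named pulp as in the report above, and that pulp.yaml already contains the new secret name; expect a short downtime while the operator redeploys):

$ kubectl -n pulp delete pulp pulp
$ kubectl apply -f pulp.yaml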

vkukk commented 2 months ago

When I create a Pulp CR, I expect the pulp-operator to read and use what I've created, not to ignore the provided configuration.

Making mistakes is an entirely different problem and should not be handled by ignoring the user's or admin's configuration changes.

gerrod3 commented 2 months ago

We probably won't be able to get around to changing the behavior anytime soon. We would appreciate any contributions.

StopMotionCuber commented 2 months ago

We probably won't be able to get around to changing the behavior anytime soon. We would appreciate any contributions.

Could you quickly confirm that you are open to a change in behavior here (that is, not reverting the change)? Code-wise I think it's rather trivial to implement; I took a look at it the last time I stumbled upon this. The bigger issue is agreeing on a way forward, and as an outside contributor these kinds of decision processes are hard to gain insight into.

Should the answer be "yes, we're open to the change", I could prepare a PR.

gerrod3 commented 2 months ago

Yes, we would be open to the change. :smile: