bitnami / charts

Bitnami Helm Charts
https://bitnami.com

[bitnami/minio] 2024.1.18-debian-11-r1 cannot persist data when the pod restarts, while 2023.5.18-debian-11-r2 works fine #22926

Closed robinliubin closed 7 months ago

robinliubin commented 9 months ago

Name and Version

bitnami/minio 13.2.1

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. With the images below in minio-values.yaml:

    image:
      registry: docker.io
      repository: bitnami/minio
      tag: 2024.1.18-debian-11-r1
    clientImage:
      registry: docker.io
      repository: bitnami/minio-client
      tag: 2024.1.18-debian-11-r1

    # ignored lines

    defaultBuckets: "test"

    provisioning:
      enabled: true
      policies:
        - name: test
          statements:
            - effect: "Allow"
              actions: ["s3:*"]
              resources: ["arn:aws:s3:::test", "arn:aws:s3:::test/*"]
      users:
        - username: 'test'
          password: 'testtest'
          policies:
            - test

  2. Install MinIO:

    helm upgrade --install minio minio-13.2.1.tgz -f minio-values.yaml -n minio

  3. Run:

    mc alias set myminio https://<minio-ingress-host> test testtest

  4. Expecting to see Added `myminio` successfully.

  5. Restart the MinIO pod manually with kubectl delete pod -l app.kubernetes.io/instance=minio -n minio

  6. Wait for the MinIO pod to be running again, then run:

    mc alias set myminio https://<minio-ingress-host> test testtest

    Now mc errors with: mc: <ERROR> Unable to initialize new alias from the provided credentials. The Access Key Id you provided does not exist in our records.

  7. Now change the image tags to:

    image:
      registry: docker.io
      repository: bitnami/minio
      tag: 2023.5.18-debian-11-r2
    clientImage:
      registry: docker.io
      repository: bitnami/minio-client
      tag: 2023.5.18-debian-11-r2

  8. Run the same steps 2-6 again; this time there is no error (a consolidated script sketch follows this list).
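
For convenience, the reproduction above can be scripted roughly as follows (a sketch, assuming the same namespace, chart tarball, and values file used in step 2; the kubectl wait selector is an assumption based on the chart's standard labels):

    # install the chart with the values above
    helm upgrade --install minio minio-13.2.1.tgz -f minio-values.yaml -n minio

    # the provisioned user works before the restart
    mc alias set myminio https://<minio-ingress-host> test testtest

    # restart the MinIO pod and wait for it to come back
    kubectl delete pod -l app.kubernetes.io/instance=minio -n minio
    kubectl wait --for=condition=ready pod -l app.kubernetes.io/instance=minio -n minio --timeout=300s

    # with tag 2024.1.18-debian-11-r1 this now fails; with 2023.5.18-debian-11-r2 it succeeds
    mc alias set myminio https://<minio-ingress-host> test testtest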

Are you using any custom parameters or values?

image:
  registry: docker.io
  repository: bitnami/minio
  tag: 2024.1.18-debian-11-r1
clientImage:
  registry: docker.io
  repository: bitnami/minio-client
  tag: 2024.1.18-debian-11-r1

mode: standalone
auth:
  rootUser: admin
  rootPassword: '{{ minio_admin_password }}'

defaultBuckets: "test"

provisioning:
  enabled: true
  policies:
    - name: test
      statements:
        - effect: "Allow"
          actions: ["s3:*"]
          resources: ["arn:aws:s3:::test", "arn:aws:s3:::test/*"]

  users:
    - username: 'test'
      password: 'testtest'
      policies:
        - test

containerPorts:
  api: 9000
  console: 9001

apiIngress:
  enabled: true
  hostname: ' {{ minio_ingress_host }}'
  path: "/"
  servicePort: minio-api
  ingressClassName: 'nginx'
  tls: true
  extraTls:
    - secretName: tls-secret
      hosts:
        - ' {{ minio_ingress_host }}'

ingress:
  enabled: true
  hostname: ' {{ minio_ingress_host }}'
  path: "/"
  servicePort: minio-console
  ingressClassName: 'nginx'
  tls: true

persistence:
  enabled: true
  mountPath: /data
  accessModes:
    - ReadWriteOnce
  size: '100Gi'
  annotations: { }
  existingClaim: ""

What is the expected behavior?

Expecting MinIO to persist data after a pod restart.

What do you see instead?

The MinIO pod loses the provisioned user credentials after a restart.

juan131 commented 9 months ago

Hi @robinliubin

Please correct me if I'm wrong, but AFAIK aliases are stored in the local mc configuration, see:

In other words, it's something saved on the "client side" rather than the "server side".

We don't persist the client-side config on MinIO containers, so it's normal to lose these aliases if the container/pod gets recreated.
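
For illustration, the alias lives only in the client's own configuration (a sketch; the config path shown is the mc default and may differ depending on how the client is run):

    # list the aliases known to this mc client
    mc alias list

    # aliases are stored in the client-side config file (default location)
    cat ~/.mc/config.json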

robinliubin commented 9 months ago

@juan131 Thanks for helping. The issue we observed is not the "client side" losing the alias, but the "server side" losing the provisioned user credentials. You can see in the values.yaml that provisioning is enabled on the server side.

However, on image tag 2024.1.18-debian-11-r1 the provisioned data is lost when the pod is restarted, while after only changing the image tag to 2023.5.18-debian-11-r2 the provisioned data persists across pod restarts.
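
One way to confirm it is the server side that loses the user (a sketch; `myroot` is just an illustrative alias name, and the root credentials come from the chart's auth values, so they are not affected by provisioning):

    # authenticate with the root user rather than the provisioned one
    mc alias set myroot https://<minio-ingress-host> admin <rootPassword>

    # list server-side users; on the affected tag the provisioned 'test' user
    # no longer shows up after the pod restart
    mc admin user list myroot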

juan131 commented 9 months ago

Hi @robinliubin

I was unable to reproduce the issue using the values.yaml below:

defaultBuckets: "test"
provisioning:
  enabled: true
  policies:
    - name: test
      statements:
        - effect: "Allow"
          actions: ["s3:*"]
          resources: ["arn:aws:s3:::test", "arn:aws:s3:::test/*"]
  users:
    - username: 'test'
      password: 'testtest'
      policies:
        - test

These are the steps I followed:

$ helm install minio oci://registry-1.docker.io/bitnamicharts/minio -f minio.yaml
NAME: minio
(...)
CHART NAME: minio
CHART VERSION: 13.3.4
APP VERSION: 2024.2.4
(...)
$ kubectl logs -l app.kubernetes.io/component=minio-provisioning -c minio
│ 127.0.0.1:9000 │ ✔      │
└────────────────┴────────┘

Restarted `provisioning` successfully in 503 milliseconds
Created policy `test` successfully.
Added user `test` successfully.
Attached Policies: [test]
To User: test
Enabled user `test` successfully.
End Minio provisioning
$ kubectl run --namespace default minio-client --rm --tty -i --restart='Never' --image docker.io/bitnami/minio-client:2024.1.31-debian-11-r1 --command -- mc alias set myminio http://minio:9000 test testtest
mc: Configuration written to `/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/.mc/share`.
mc: Initialized share uploads `/.mc/share/uploads.json` file.
mc: Initialized share downloads `/.mc/share/downloads.json` file.
Added `myminio` successfully.
pod "minio-client" deleted
$ kubectl delete pod -l app.kubernetes.io/instance=minio
pod "minio-7fc546fdff-qqj2m" deleted
$ kubectl run --namespace default minio-client --rm --tty -i --restart='Never' --image docker.io/bitnami/minio-client:2024.1.31-debian-11-r1 --command -- mc alias set myminio http://minio:9000 test testtest
mc: Configuration written to `/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/.mc/share`.
mc: Initialized share uploads `/.mc/share/uploads.json` file.
mc: Initialized share downloads `/.mc/share/downloads.json` file.
Added `myminio` successfully.
pod "minio-client" deleted
robinliubin commented 9 months ago

persistence:
  enabled: true
  mountPath: /data
  accessModes:
    - ReadWriteOnce
  size: '100Gi'
  annotations:
    "helm.sh/resource-policy": keep
  existingClaim: ""

In my test, persistence makes the difference: if this section is added, the issue is reproducible.

juan131 commented 9 months ago

Hi @robinliubin

I'm also enabling persistence in my tests (it's enabled by default). Why did you change the default mount path (see https://github.com/bitnami/charts/blob/main/bitnami/minio/values.yaml#L1012)? Please note it was changed in https://github.com/bitnami/charts/commit/e707712fbd687ac271fdcecdf415f4f2a6aeb76e

robinliubin commented 9 months ago

Tested with the default mountPath: /bitnami/minio/data, and now the data is persisted. Though I still don't understand why mountPath would lead to this issue.
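
For reference, this is the persistence block that works with the newer image, i.e. the earlier snippet with mountPath aligned to the chart default (a sketch based on the values shared above):

    persistence:
      enabled: true
      mountPath: /bitnami/minio/data
      accessModes:
        - ReadWriteOnce
      size: '100Gi'
      annotations:
        "helm.sh/resource-policy": keep
      existingClaim: ""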

juan131 commented 9 months ago

Hi @robinliubin

The new container image expects the data to be mounted on a different path; see the value of MINIO_DATA_DIR:

Therefore, the mount path must be aligned with that.
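
A quick way to see the mismatch in a running release (a sketch; the deployment name assumes a release called `minio` in the `minio` namespace, and the script path is an assumption based on the usual Bitnami image layout):

    # where the chart mounts the persistent volume
    kubectl get deploy minio -n minio -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}{"\n"}'

    # where the image expects its data directory
    kubectl exec -n minio deploy/minio -- grep -r MINIO_DATA_DIR /opt/bitnami/scripts/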

robinliubin commented 8 months ago

@juan131, if it has to be static, then the Helm chart should not expose it, to avoid users wrongly modifying the value.

github-actions[bot] commented 8 months ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 7 months ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.