Hi @robinliubin
Please correct me if I'm wrong, but AFAIK aliases are stored in the local mc configuration.
In other words, they are saved on the "client side" rather than the "server side".
We don't persist the client-side config on MinIO containers, therefore it's normal to lose these aliases if the container/pod gets recreated.
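For illustration (this is not from the original thread, just a minimal sketch): aliases created with `mc alias set` end up in the client's local config file, which is why they don't survive container recreation unless that path itself is persisted.

```console
$ mc alias set myminio http://minio:9000 admin adminpassword
Added `myminio` successfully.
# the alias lives in the client's config file, not on the MinIO server
$ cat ~/.mc/config.json
```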
@juan131 Thanks for helping. The issue we observed is not the "client side" losing the alias, but the "server side" losing provisioned user credentials. You can see in the values.yaml that provisioning is enabled on the server side.
However, on image tag `2024.1.18-debian-11-r1`, the provisioned data is lost when the pod is restarted, while after only changing the image tag to `2023.5.18-debian-11-r2`, the provisioned data persists across pod restarts.
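For reference, pinning the server image tag in the chart's values looks roughly like this (the `image.tag` key follows the usual Bitnami chart convention; treat this as a sketch rather than the exact values file used):

```yaml
image:
  registry: docker.io
  repository: bitnami/minio
  tag: 2023.5.18-debian-11-r2  # with this tag, provisioned data survived restarts
```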
Hi @robinliubin
I was unable to reproduce the issue using the values.yaml below:
```yaml
defaultBuckets: "test"
provisioning:
  enabled: true
  policies:
    - name: test
      statements:
        - effect: "Allow"
          actions: ["s3:*"]
          resources: ["arn:aws:s3:::test", "arn:aws:s3:::test/*"]
  users:
    - username: 'test'
      password: 'testtest'
      policies:
        - test
```
These are the steps I followed:
```console
$ helm install minio oci://registry-1.docker.io/bitnamicharts/minio -f minio.yaml
NAME: minio
(...)
CHART NAME: minio
CHART VERSION: 13.3.4
APP VERSION: 2024.2.4
(...)
```
```console
$ kubectl logs -l app.kubernetes.io/component=minio-provisioning -c minio
(...)
│ 127.0.0.1:9000 │ ✔ │
└────────────────┴────────┘
Restarted `provisioning` successfully in 503 milliseconds
Created policy `test` successfully.
Added user `test` successfully.
Attached Policies: [test]
To User: test
Enabled user `test` successfully.
End Minio provisioning
```
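Judging by those log lines, the provisioning job appears to run roughly the following mc commands against an alias named `provisioning` (the exact invocations, and the policy file path, are my assumption, not taken from the chart source):

```console
$ mc admin service restart provisioning                          # "Restarted `provisioning` ..."
$ mc admin policy create provisioning test /path/to/policy.json  # "Created policy `test` ..."
$ mc admin user add provisioning test testtest                   # "Added user `test` ..."
$ mc admin policy attach provisioning test --user test           # "Attached Policies: [test]"
$ mc admin user enable provisioning test                         # "Enabled user `test` ..."
```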
Then I verified the provisioned credentials with the `mc alias` command:

```console
$ kubectl run --namespace default minio-client --rm --tty -i --restart='Never' --image docker.io/bitnami/minio-client:2024.1.31-debian-11-r1 --command -- mc alias set myminio http://minio:9000 test testtest
mc: Configuration written to `/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/.mc/share`.
mc: Initialized share uploads `/.mc/share/uploads.json` file.
mc: Initialized share downloads `/.mc/share/downloads.json` file.
Added `myminio` successfully.
pod "minio-client" deleted
```
Finally, I restarted the MinIO pod and repeated the `mc alias` command:

```console
$ kubectl delete pod -l app.kubernetes.io/instance=minio
pod "minio-7fc546fdff-qqj2m" deleted
$ kubectl run --namespace default minio-client --rm --tty -i --restart='Never' --image docker.io/bitnami/minio-client:2024.1.31-debian-11-r1 --command -- mc alias set myminio http://minio:9000 test testtest
mc: Configuration written to `/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/.mc/share`.
mc: Initialized share uploads `/.mc/share/uploads.json` file.
mc: Initialized share downloads `/.mc/share/downloads.json` file.
Added `myminio` successfully.
pod "minio-client" deleted
```
In my test, the `persistence` section makes the difference: if the section below is added, the issue is reproducible.

```yaml
persistence:
  enabled: true
  mountPath: /data
  accessModes:
    - ReadWriteOnce
  size: '100Gi'
  annotations:
    "helm.sh/resource-policy": keep
  existingClaim: ""
```
Hi @robinliubin
I'm also enabling persistence in my tests (it's enabled by default). Why did you change the default mount path (see https://github.com/bitnami/charts/blob/main/bitnami/minio/values.yaml#L1012)? Please note the default was changed in https://github.com/bitnami/charts/commit/e707712fbd687ac271fdcecdf415f4f2a6aeb76e
Tested with the default `mountPath: /bitnami/minio/data`, and now the data is persisted.
Though I still don't understand why `mountPath` would lead to this issue.
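For anyone hitting the same problem, the working configuration simply keeps the chart's default mount path. A sketch, assuming otherwise-default values:

```yaml
persistence:
  enabled: true
  mountPath: /bitnami/minio/data  # chart default; matches MINIO_DATA_DIR in newer images
  size: '100Gi'
```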
Hi @robinliubin
The new container image expects the data to be mounted on a different path; see the value of `MINIO_DATA_DIR`.
Therefore, the mount path must be aligned with it.
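A quick way to confirm which path the image expects (assuming the default resource names created by the chart):

```console
$ kubectl exec deploy/minio -- printenv MINIO_DATA_DIR
/bitnami/minio/data
```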
@juan131, if it has to be static, then the chart should not expose it, to prevent users from wrongly modifying the value.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Name and Version
bitnami/minio 13.2.1
What architecture are you using?
amd64
What steps will reproduce the bug?
1. Use the image tags below in `minio-values.yaml` (the original code block was lost; see the reconstruction after this list).
2. Install MinIO.
3. Run the `mc alias set` command.
4. Expect to see `Added myminio successfully.`
5. Restart the MinIO pod manually with `kubectl delete pod -l app.kubernetes.io/instance=minio -n minio`.
6. Wait for the MinIO pod to be running, then run the same `mc alias set` command again.
7. It now fails with `mc: <ERROR> Unable to initialize new alias from the provided credentials. The Access Key Id you provided does not exist in our records.`
8. Change the image tags (see the reconstruction below) and run the same steps 2-6: no error.
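The inline code blocks from the original report were lost when this thread was captured. Based on the image tags and commands quoted elsewhere in the thread, the elided values and commands were presumably along these lines (a reconstruction, not the verbatim report):

```yaml
# minio-values.yaml -- failing combination
image:
  tag: 2024.1.18-debian-11-r1
clientImage:
  tag: 2024.1.31-debian-11-r1
# working combination: image.tag: 2023.5.18-debian-11-r2
```

```console
$ kubectl run --namespace minio minio-client --rm --tty -i --restart='Never' \
    --image docker.io/bitnami/minio-client:2024.1.31-debian-11-r1 \
    --command -- mc alias set myminio http://minio:9000 test testtest
```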
Are you using any custom parameters or values?
What is the expected behavior?
Expecting MinIO to persist provisioned data after a pod restart.
What do you see instead?
The MinIO pod loses the provisioned user credentials after a restart.