bitnami / charts

Bitnami Helm Charts
https://bitnami.com

[bitnami/minio] Access Denied when Deploying Bitnami/MinIO 2021.3.26-debian-10-r1 #5951

Closed: ethernoy closed this issue 2 years ago

ethernoy commented 3 years ago

Which chart: minio (6.7.1)

Describe the bug: Encountered an "Access Denied" error when deploying the image Bitnami/MinIO 2021.3.26-debian-10-r1 in distributed mode. The MinIO container restarts after this error occurs.

To Reproduce: Steps to reproduce the behavior:

Deploy Bitnami/MinIO 2021.3.26-debian-10-r1 in distributed mode.

Expected behavior: MinIO works normally.

Version of Helm and Kubernetes:

version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5+vmware.1", GitCommit:"1abde2b816bac0da89c6c71360799c681094ca0e", GitTreeState:"clean", BuildDate:"2020-06-29T22:31:51Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Additional context: The kernel I am running MinIO on is 4.19.129-1.ph3-esx. The admission controller is enabled and MinIO is granted root permission.

marcosbc commented 3 years ago

Hi @ethernoy, could you share more details on how you're deploying MinIO? It works for me:

$ helm install myminio bitnami/minio --set mode=distributed
$ kubectl get pods
...
myminio-0                                  1/1     Running            0          8m27s
myminio-1                                  1/1     Running            0          8m27s
myminio-2                                  1/1     Running            0          8m27s
myminio-3                                  1/1     Running            0          8m27s
marcosbc commented 3 years ago

Also, make sure that there isn't any PVC left over from a previous deployment. If there is, the new deployment may fail because it uses the wrong credentials, causing errors like:

API: SYSTEM()
Time: 08:39:27 UTC 03/30/2021
Error: Marking http://myminio-1.myminio-headless.default.svc.cluster.local:9000/minio/storage/data/v29 temporary offline; caused by Post "http://myminio-1.myminio-headless.default.svc.cluster.local:9000/minio/storage/data/v29/readall?disk-id=&file-path=format.json&volume=.minio.sys": lookup myminio-1.myminio-headless.default.svc.cluster.local on 10.30.240.10:53: no such host (*fmt.wrapError)
       6: cmd/rest/client.go:138:rest.(*Client).Call()
       5: cmd/storage-rest-client.go:151:cmd.(*storageRESTClient).call()
       4: cmd/storage-rest-client.go:471:cmd.(*storageRESTClient).ReadAll()
       3: cmd/format-erasure.go:405:cmd.loadFormatErasure()
       2: cmd/format-erasure.go:325:cmd.loadFormatErasureAll.func1()
       1: pkg/sync/errgroup/errgroup.go:122:errgroup.(*Group).Go.func1()
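The leftover-PVC check described above can be sketched as follows. This is a rough sequence for a release named `myminio`; the label selector and release name are assumptions based on the example earlier in the thread, so adjust them to your deployment:

```shell
# List PVCs left behind by a previous MinIO release
kubectl get pvc -l app.kubernetes.io/instance=myminio

# If stale claims exist, delete them so the next install starts
# with fresh credentials. WARNING: this destroys the stored data.
kubectl delete pvc -l app.kubernetes.io/instance=myminio

# Reinstall once the claims are gone
helm install myminio bitnami/minio --set mode=distributed
```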
ethernoy commented 3 years ago

Hi @marcosbc

Attached is the content of the values.yaml I use in the deployment:

global:
  imagePullSecrets: 
  - mySecret
  storageClass: myStorageClass
image:
  registry: myregistry
  repository: observability/bitnami/minio
  tag: 2021.3.26-debian-10-r1
  pullPolicy: Always
  debug: true
clientImage:
  registry: myregistry
  repository: observability/bitnami/minio-client
  tag: 2021.3.23-debian-10-r5
mode: distributed
accessKey:
  password: thanos123
  forcePassword: true
secretKey:
  password: thanos123
  forcePassword: true
defaultBuckets: "thanos"
statefulset:
  updateStrategy: RollingUpdate
  podManagementPolicy: Parallel
  replicaCount: 4
  zones: 1
  drivesPerNode: 1
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 0
resources:
  limits:
    cpu: 300m
    memory: 512Mi
  requests:
    cpu: 256m
    memory: 256Mi
persistence:
  size: 10Gi

Here is the content of myStorageClass:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2021-03-25T03:26:52Z"
  name: myStorageClass
parameters:
  svStorageClass: myStorageClass
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
marcosbc commented 3 years ago

Please also share more information, such as the deployment error you see before the pod gets restarted.

Did you check whether there could be an existing PVC where MinIO data is stored with credentials different from the ones you set?

ethernoy commented 3 years ago

> Please also share more information, such as the deployment error you see before the pod gets restarted.
>
> Did you check whether there could be an existing PVC where MinIO data is stored with credentials different from the ones you set?

Most of the MinIO pods end with the following log pattern; occasionally there are other logs before they terminate, but I have not managed to capture one yet.

API: SYSTEM()
Time: 02:43:48 UTC 03/31/2021
Error: Marking http://minio-test-2.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29 temporary offline; caused by Post "http://minio-test-2.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 192.168.12.40:9000: connect: connection refused (*fmt.wrapError)
       6: cmd/rest/client.go:138:rest.(*Client).Call()
       5: cmd/storage-rest-client.go:151:cmd.(*storageRESTClient).call()
       4: cmd/storage-rest-client.go:471:cmd.(*storageRESTClient).ReadAll()
       3: cmd/format-erasure.go:405:cmd.loadFormatErasure()
       2: cmd/format-erasure.go:325:cmd.loadFormatErasureAll.func1()
       1: pkg/sync/errgroup/errgroup.go:122:errgroup.(*Group).Go.func1()
Waiting for a minimum of 2 disks to come online (elapsed 8s)
 02:43:48.95 INFO  ==> Adding local Minio host to 'mc' configuration...
API: SYSTEM()
Time: 02:43:49 UTC 03/31/2021
Error: Marking http://minio-test-2.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29 temporary offline; caused by Post "http://minio-test-2.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 192.168.12.40:9000: connect: connection refused (*fmt.wrapError)
       6: cmd/rest/client.go:138:rest.(*Client).Call()
       5: cmd/storage-rest-client.go:151:cmd.(*storageRESTClient).call()
       4: cmd/storage-rest-client.go:471:cmd.(*storageRESTClient).ReadAll()
       3: cmd/format-erasure.go:405:cmd.loadFormatErasure()
       2: cmd/format-erasure.go:325:cmd.loadFormatErasureAll.func1()
       1: pkg/sync/errgroup/errgroup.go:122:errgroup.(*Group).Go.func1()
API: SYSTEM()
Time: 02:43:49 UTC 03/31/2021
Error: Marking http://minio-test-0.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29 temporary offline; caused by Post "http://minio-test-0.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 192.168.9.48:9000: connect: connection refused (*fmt.wrapError)
       6: cmd/rest/client.go:138:rest.(*Client).Call()
       5: cmd/storage-rest-client.go:151:cmd.(*storageRESTClient).call()
       4: cmd/storage-rest-client.go:471:cmd.(*storageRESTClient).ReadAll()
       3: cmd/format-erasure.go:405:cmd.loadFormatErasure()
       2: cmd/format-erasure.go:325:cmd.loadFormatErasureAll.func1()
       1: pkg/sync/errgroup/errgroup.go:122:errgroup.(*Group).Go.func1()
Waiting for a minimum of 2 disks to come online (elapsed 8s)
API: SYSTEM()
Time: 02:43:49 UTC 03/31/2021
Error: Access Denied. (*errors.errorString)
       requestHeaders={"method":"GET","reqURI":"/minio/admin/v3/info","header":{"Host":["localhost:9000"],"User-Agent":["MinIO (linux; amd64) madmin-go/0.0.1 mc/DEVELOPMENT.2021-03-23T09-13-19Z"],"X-Amz-Content-Sha256":["e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"]}}
       4: cmd/auth-handler.go:143:cmd.validateAdminSignature()
       3: cmd/auth-handler.go:159:cmd.checkAdminRequestAuth()
       2: cmd/admin-handlers.go:1520:cmd.adminAPIHandlers.ServerInfoHandler()
       1: net/http/server.go:2069:http.HandlerFunc.ServeHTTP()
 02:43:49.33 INFO  ==> MinIO is already stopped...
stream closed
marcosbc commented 3 years ago

Hi @ethernoy, the error looks like it could be related to there being an existing PVC:

Error: Access Denied. (*errors.errorString)

I still haven't got confirmation from your side that you've checked whether that could be the case. You can list the PVCs with kubectl get pvc.

Could you redeploy in another namespace and/or with another release name and check if it works? Make sure the release name is unique within the namespace (e.g. miniotest-123-unique), or you will hit the same errors.

ethernoy commented 3 years ago

> Hi @ethernoy, the error looks like it could be related to there being an existing PVC:
>
>     Error: Access Denied. (*errors.errorString)
>
> I still haven't got confirmation from your side that you've checked whether that could be the case. You can list the PVCs with kubectl get pvc.
>
> Could you redeploy in another namespace and/or with another release name and check if it works? Make sure the release name is unique within the namespace (e.g. miniotest-123-unique), or you will hit the same errors.

I just tested two cases:

  1. uninstall minio on the same namespace, delete all related pvc, then reinstall
  2. install minio on a different namespace using a different release name

Both tests resulted in the same "Access Denied" error we discussed above.

marcosbc commented 3 years ago

Hi @ethernoy, I'm checking your configuration and I don't understand this:

securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 0

That will cause MinIO to run as the root user. This is currently not supported by the Docker image, as there seems to be a bug where the minio user is not created before starting the container:

 10:35:36.00 INFO  ==> ** Starting MinIO **
error: failed switching to "minio": unable to find user minio: no matching entries in passwd file

If I remove that configuration, I'm able to work around that error. Could you try it?
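The non-root workaround can be sketched as a redeploy with the chart's securityContext values overridden. The release name and namespace below are placeholders taken from the logs earlier in the thread; adjust them to your setup:

```shell
# Redeploy running as the non-root minio user (UID 1001) instead of root
helm upgrade --install minio-test bitnami/minio \
  --namespace observability \
  --set mode=distributed \
  --set securityContext.enabled=true \
  --set securityContext.fsGroup=1001 \
  --set securityContext.runAsUser=1001
```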

ethernoy commented 3 years ago

> Hi @ethernoy, I'm checking your configuration and I don't understand this:
>
>     securityContext:
>       enabled: true
>       fsGroup: 1001
>       runAsUser: 0
>
> That will cause MinIO to run as the root user. This is currently not supported by the Docker image, as there seems to be a bug where the minio user is not created before starting the container:
>
>      10:35:36.00 INFO  ==> ** Starting MinIO **
>     error: failed switching to "minio": unable to find user minio: no matching entries in passwd file
>
> If I remove that configuration, I'm able to work around that error. Could you try it?

I believe it is not related to the runAsUser configuration. I tested the following cases:

Here is the non-root values.yaml I used:

global:
  imagePullSecrets: 
  - platform-tool-docker-repo
  storageClass: dev-cld-st01-storage-policy
image:
  registry: {repository_link}
  repository: observability/bitnami/minio
  tag: 2021.3.26-debian-10-r1
  pullPolicy: Always
  debug: true
clientImage:
  registry: {repository_link}
  repository: observability/bitnami/minio-client
  tag: 2021.3.23-debian-10-r5
mode: distributed
accessKey:
  password: thanos123
  forcePassword: true
secretKey:
  password: thanos123
  forcePassword: true
defaultBuckets: "thanos"
statefulset:
  updateStrategy: RollingUpdate
  podManagementPolicy: Parallel
  replicaCount: 4
  zones: 1
  drivesPerNode: 1
resources:
  limits:
    cpu: 300m
    memory: 512Mi
  requests:
    cpu: 256m
    memory: 256Mi
persistence:
  size: 10Gi

Installing Bitnami MinIO using this values.yaml still results in the "Access Denied" error (screenshot attached).

marcosbc commented 3 years ago

Hi, I'm going to forward this case to @juan131, who has more experience with MinIO. I'm running into some issues myself, although they are not related to your error (which I'm able to get past without problems).

In the meantime, it would be great if you could share more specs of your Kubernetes cluster. For instance, is it a vanilla Kubernetes cluster, or are you running a Kubernetes distribution? Could it also be that you are running a MinIO image based on Photon, from TAC?

juan131 commented 3 years ago

Hi @ethernoy

I agree with @marcosbc that the "securityContext" shouldn't force the container to run as user "0", so please ensure you include the section below in your values.yaml:

securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

You'll have to ensure that your cluster supports changing the ownership and permissions of each volume's contents. See:

Also, we're finding some issues with the "default buckets" feature in distributed mode. Therefore, I recommend removing the defaultBuckets parameter and creating your buckets manually after the MinIO cluster is up and running.
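Manual bucket creation can be sketched with `mc` from a throwaway client pod. The alias, in-cluster service URL, and credentials below are assumptions based on the values shared earlier in this thread, not the chart's actual service names:

```shell
# Start a temporary MinIO client pod and create the bucket by hand
kubectl run mc-shell -it --rm --restart=Never \
  --image=bitnami/minio-client --command -- bash -c '
    mc alias set target http://minio-test.observability.svc.cluster.local:9000 thanos123 thanos123 &&
    mc mb target/thanos &&
    mc ls target
  '
```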

ethernoy commented 3 years ago

> Hi @ethernoy
>
> I agree with @marcosbc that the "securityContext" shouldn't force the container to run as user "0", so please ensure you include the section below in your values.yaml:
>
>     securityContext:
>       enabled: true
>       fsGroup: 1001
>       runAsUser: 1001
>
> You'll have to ensure that your cluster supports changing the ownership and permissions of each volume's contents. See:
>
> Also, we're finding some issues with the "default buckets" feature in distributed mode. Therefore, I recommend removing the defaultBuckets parameter and creating your buckets manually after the MinIO cluster is up and running.

Hi, I just double-checked: when deploying with the default securityContext and the default bucket disabled, Bitnami MinIO works normally.

juan131 commented 3 years ago

Great!! I'll set a reminder for myself to see how we can modify the approach we use to create the "default buckets" in distributed mode. It's clearly not working as expected.

ethernoy commented 3 years ago

> Great!! I'll set a reminder for myself to see how we can modify the approach we use to create the "default buckets" in distributed mode. It's clearly not working as expected.

I am curious: is this issue related only to the Bitnami MinIO chart and solvable by modifying the chart alone, or does it also involve the Bitnami MinIO Docker image?

juan131 commented 3 years ago

I'd say both, @ethernoy.

It can be solved by improving the logic in the Bitnami MinIO container image, which does not properly manage bucket creation when distributed mode is used. However, we could also take a completely different approach and delegate default bucket creation to a Kubernetes Job that creates the buckets once the MinIO cluster is up and ready. In that case, we would implement the solution in the MinIO chart without modifying the container's current logic.
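A rough sketch of that Job-based approach follows. Everything here (names, image tag, secret keys, service URL) is illustrative, not the chart's actual implementation:

```shell
# Hypothetical post-install Job that creates the default bucket
# once the MinIO cluster is reachable; OnFailure lets it retry
# until the cluster comes up.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: minio-default-buckets
  namespace: observability
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: mc
          image: bitnami/minio-client:2021.3.23-debian-10-r5
          command:
            - bash
            - -c
            - |
              mc alias set target http://minio-test.observability.svc.cluster.local:9000 \
                "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY" &&
              mc mb --ignore-existing target/thanos
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom: {secretKeyRef: {name: minio-test, key: access-key}}
            - name: MINIO_SECRET_KEY
              valueFrom: {secretKeyRef: {name: minio-test, key: secret-key}}
EOF
```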

github-actions[bot] commented 3 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

ZILosoft commented 2 years ago

Any news?

carrodher commented 2 years ago

Unfortunately, there has been no internal progress on this task, and I'm afraid that if it was not prioritized during this time, there's not much chance we'll work on it in the short term. Since we are a small team maintaining a lot of assets, it is difficult to find the bandwidth to implement every request.

That being said, thanks for reporting this issue and for staying on top of it. Would you like to contribute by creating a PR to solve it? The Bitnami team will be happy to review it and provide feedback. Here you can find the contributing guidelines.

github-actions[bot] commented 2 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 2 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

papierkorp commented 1 year ago

I had this problem with minikube. Since I just needed it for testing purposes, I added --set persistence.enabled="false".

The full command being:

helm install minio bitnami/minio --namespace minio --create-namespace --set image.debug="true" --set service.type="ClusterIP" --set persistence.enabled="false"
gruberdev commented 11 months ago

> I had this problem with minikube. Since I just needed it for testing purposes, I added --set persistence.enabled="false".
>
> The full command being:
>
>     helm install minio bitnami/minio --namespace minio --create-namespace --set image.debug="true" --set service.type="ClusterIP" --set persistence.enabled="false"

If there's no storage system or persistence, it makes sense that this error would not occur.

I would argue it is not relevant to the issue discussed here.