minio / operator

Simple Kubernetes Operator for MinIO clusters :computer:
https://min.io/docs/minio/kubernetes/upstream/index.html
GNU Affero General Public License v3.0

Cannot login with minio console after tenant deployment 401 Unauthorized #2186

Closed: modpy44 closed this issue 2 weeks ago

modpy44 commented 2 months ago

Expected Behavior

I can log in to the MinIO Console with the tenant root username and password.

Current Behavior

[Screenshots omitted: the tenant console rejects the root credentials with an "Invalid login" error, and the request returns 401 Unauthorized]

Steps to Reproduce (for bugs)

  1. Install MinIO Operator v5.0.15
  2. Deploy a new tenant

Your Environment

Tenant manifest


apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  creationTimestamp: "2024-06-24T10:57:09Z"
  generation: 4
  name: test-tenant
  namespace: minio-cluster
  resourceVersion: "1388959"
  uid: c7b266e0-8778-4897-b7f4-ff92a9c88422
scheduler:
  name: ""
spec:
  configuration:
    name: test-tenant-env-configuration
  credsSecret:
    name: test-tenant-secret
  exposeServices:
    console: true
    minio: true
  features:
    domains:
      minio:
      - http://s3.example.com
    enableSFTP: false
  image: minio/minio:latest
  imagePullSecret: {}
  mountPath: /export
  pools:
  - affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: v1.min.io/tenant
              operator: In
              values:
              - test-tenant
            - key: v1.min.io/pool
              operator: In
              values:
              - pool-0
          topologyKey: kubernetes.io/hostname
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
      seccompProfile:
        type: RuntimeDefault
    name: pool-0
    resources:
      requests:
        cpu: "1"
        memory: 2Gi
    runtimeClassName: ""
    securityContext:
      fsGroup: 1000
      fsGroupChangePolicy: OnRootMismatch
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
    servers: 2
    volumeClaimTemplate:
      metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "2684354560"
        storageClassName: local-path
      status: {}
    volumesPerServer: 4
  requestAutoCert: true
  users:
  - name: test-tenant-user-0
status:
  availableReplicas: 2
  certificates:
    autoCertEnabled: true
    customCertificates: {}
  currentState: Initialized
  drivesOnline: 8
  healthStatus: green
  pools:
  - legacySecurityContext: false
    ssName: test-tenant-pool-0
    state: PoolInitialized
  provisionedUsers: true
  revision: 0
  syncVersion: v5.0.0
  usage:
    capacity: 677462163456
    rawCapacity: 21474836480
    rawUsage: 130251833344
    usage: 130251833344
  writeQuorum: 5

cesnietor commented 2 months ago

@modpy44 Are these credentials the same as the ones defined in the test-tenant-env-configuration secret? (You'd need to base64-decode them.)
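
For reference, a one-liner along these lines decodes it (assuming the root credentials live under the secret's config.env key, which is the Operator's default layout):

kubectl -n minio-cluster get secret test-tenant-env-configuration \
  -o jsonpath='{.data.config\.env}' | base64 -d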

modpy44 commented 2 months ago

@cesnietor checked the secret and decoded it, they are the same credentials

cesnietor commented 2 months ago

@modpy44 We'll try to reproduce it. This might be obvious, but do you have any other MinIO tenants? Is this the same one you are trying to access?

modpy44 commented 2 months ago

@cesnietor Yes, it's the only tenant I've deployed. Let me know if you need more info to reproduce.

allanrogerr commented 2 months ago

Can you access the MinIO Operator Console with the generated JWT? See https://min.io/docs/minio/kubernetes/upstream/operations/installation.html ("Retrieve the Operator Console JWT for login")
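
For example, something like this should print the JWT (assuming the default minio-operator namespace and the console-sa-secret that Operator v5 creates):

kubectl -n minio-operator get secret console-sa-secret \
  -o jsonpath='{.data.token}' | base64 -d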

cesnietor commented 2 months ago

@modpy44 I tested with version 5.0.15 in a new k8s environment and I can't reproduce the issue.

It all depends on how you are exposing the service. I modified my service to use NodePort for testing purposes and was able to connect to it just fine. This might be a network issue. What's the response of the request?

modpy44 commented 2 months ago

@cesnietor Thanks. The default service type is LoadBalancer in my setup and it's stuck in the Pending state. How did you change it to a NodePort service?
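
(A LoadBalancer service stays in Pending when the cluster has no load-balancer controller to assign it an external IP. The tenant services and their current state can be listed with, e.g.:)

kubectl -n minio-cluster get svc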

allanrogerr commented 2 months ago

1.-

Can you access the MinIO Operator Console with the generated JWT? See https://min.io/docs/minio/kubernetes/upstream/operations/installation.html ("Retrieve the Operator Console JWT for login")

PTAL. You should be able to hop from the MinIO Operator Console to the tenant console using the Management Console icon (screenshot omitted).

2.- I notice this non-existent domain in your tenant YAML. Could you please recreate the tenant from scratch? (A sketch of the commands follows point 3 below.)

  features:
    domains:
      minio:
      - http://s3.example.com

3.- NodePort can be set up using step 3, "(Optional) Configure access to the Operator Console service", of https://min.io/docs/minio/kubernetes/upstream/operations/installation.html.
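
For reference, hedged sketches of both suggestions, using the names from this issue (test-tenant in the minio-cluster namespace; the console service name test-tenant-console and the tenant.yaml filename are assumptions based on the Operator's defaults):

# Recreate the tenant from scratch (destructive: deletes the tenant's pods and services)
kubectl -n minio-cluster delete tenant test-tenant
kubectl apply -f tenant.yaml   # re-apply the corrected manifest without the bogus domains block

# Or, for testing, switch the tenant console service to NodePort
kubectl -n minio-cluster patch svc test-tenant-console -p '{"spec":{"type":"NodePort"}}'
kubectl -n minio-cluster get svc test-tenant-console   # note the assigned node port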

modpy44 commented 1 month ago

@cesnietor I see another problem in the logs that might be behind this issue:

kubectl logs -f test-tenant-pool-0-1
Defaulted container "minio" out of: minio, sidecar, validate-arguments (init)
Unable to use the drive https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export0: drive not found
Unable to use the drive https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export1: drive not found

API: SYSTEM.storage
Time: 10:26:50 UTC 07/07/2024
DeploymentID: 52f90ae7-64b7-44cf-baf2-8ef9d8296e50
Error: Write quorum could not be established on pool: 0, set: 0, expected write quorum: 3, drives-online: 2 (*errors.errorString)
       maintenance="false"
       6: internal/logger/logger.go:268:logger.LogIf()
       5: cmd/logging.go:156:cmd.storageLogIf()
       4: cmd/erasure-server-pool.go:2610:cmd.(*erasureServerPools).Health()
       3: cmd/server-main.go:924:cmd.serverMain.func11()
       2: cmd/server-main.go:561:cmd.bootstrapTrace()
       1: cmd/server-main.go:923:cmd.serverMain()
Waiting for all MinIO sub-systems to be initialize...
Configured max API requests per node based on available memory: 252
All MinIO sub-systems initialized successfully in 1.935451ms
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2024-07-04T14-25-45Z (go1.22.5 linux/amd64)

API: http://s3.ins4trk.com 
WebUI: http://minio-console.ins4trk.com 

Docs: https://min.io/docs/minio/linux/index.html
Use `mc admin info` to look for latest server/drive info
 Status:         2 Online, 2 Offline. 

API: SYSTEM.peers
Time: 10:26:50 UTC 07/07/2024
DeploymentID: 52f90ae7-64b7-44cf-baf2-8ef9d8296e50
Error: Drive: https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export1 returned drive not found (*fmt.wrapError)
       endpoint="https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export1"
       4: internal/logger/logger.go:258:logger.LogAlwaysIf()
       3: cmd/logging.go:65:cmd.peersLogAlwaysIf()
       2: cmd/prepare-storage.go:51:cmd.init.func22.1()
       1: cmd/erasure-sets.go:227:cmd.(*erasureSets).connectDisks.func2()

API: SYSTEM.peers
Time: 10:26:50 UTC 07/07/2024
DeploymentID: 52f90ae7-64b7-44cf-baf2-8ef9d8296e50
Error: Drive: https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export0 returned drive not found (*fmt.wrapError)
       endpoint="https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export0"
       4: internal/logger/logger.go:258:logger.LogAlwaysIf()
       3: cmd/logging.go:65:cmd.peersLogAlwaysIf()
       2: cmd/prepare-storage.go:51:cmd.init.func22.1()
       1: cmd/erasure-sets.go:227:cmd.(*erasureSets).connectDisks.func2()

allanrogerr commented 1 month ago

Yes, your pods are not online. Check kubectl -n <tenant namespace> get pods. You may need to describe the pod that cannot start to find out why.
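
For example, with the namespace and pod names from this issue (the PVC check is an extra suggestion, since the "drive not found" errors above often mean a volume never came up):

kubectl -n minio-cluster get pods
kubectl -n minio-cluster describe pod test-tenant-pool-0-0
kubectl -n minio-cluster get pvc   # each drive is a PVC; unbound claims keep the pod from starting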

ramondeklein commented 2 weeks ago

Closing: the user didn't reply, and this is a problem with drives not being available rather than a console issue.