Closed: modpy44 closed 2 weeks ago
@modpy44 are these credentials the same as the ones defined in the test-tenant-env-configuration secret? (You'd need to base64-decode them.)
@cesnietor checked the secret and decoded it, they are the same credentials
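For anyone following along, a quick way to do this check: the tenant root credentials are stored base64-encoded in the env-configuration secret. A sketch, with the caveats that the data key name `config.env` is an assumption (inspect the secret to confirm) and that the kubectl line needs cluster access, so it is shown commented out with the decode step demonstrated on a sample value:

```shell
# Requires cluster access; the secret data key "config.env" is an
# assumption -- run "kubectl get secret ... -o yaml" to confirm the key:
# kubectl -n minio-cluster get secret test-tenant-env-configuration \
#     -o jsonpath='{.data.config\.env}' | base64 -d

# The decode step itself, demonstrated on a sample base64 value:
printf 'TUlOSU9fUk9PVF9VU0VSPW1pbmlv' | base64 -d   # prints MINIO_ROOT_USER=minio
```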
@modpy44 We'll try to reproduce it. This might be obvious, but do you have any other MinIO tenants? Is this the same one you are trying to fetch?
@cesnietor Yes, it's the only tenant I've deployed. Let me know if you need more info to reproduce.
Can you access the MinIO Operator Console with the generated JWT? See https://min.io/docs/minio/kubernetes/upstream/operations/installation.html (Retrieve the Operator Console JWT for login).
@modpy44 I tested with that version 5.0.15 in a new k8s env and I can't reproduce the issue. Steps were:
It all depends on how you are exposing the service. I modified my service to use NodePort for testing purposes and was able to connect to it just fine. This might be a network issue. What's the response of the request?
@cesnietor Thanks. The default service type is LoadBalancer in my setup and it's stuck in Pending state; how did you change it to a NodePort service?
1.- Can you access the MinIO Operator Console with the generated JWT? See https://min.io/docs/minio/kubernetes/upstream/operations/installation.html (Retrieve the Operator Console JWT for login). PTAL. You should be able to hop from the minio-operator console to the tenant console using the Management Console icon.
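For completeness, that docs step boils down to one command. A sketch assuming the default minio-operator namespace and the console-sa-secret name used in the linked installation docs (adjust both if your deployment differs); it requires cluster access:

```shell
# Print the Operator Console login JWT (names per the linked docs;
# default "minio-operator" namespace, "console-sa-secret" secret):
kubectl -n minio-operator get secret/console-sa-secret \
    -o go-template='{{.data.token | base64decode}}'
```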
2.- I noticed this non-existent domain in your tenant YAML. Could you recreate the tenant from scratch, please?
features:
  domains:
    minio:
    - http://s3.example.com
3.- NodePort can be set up using step 3, "(Optional) Configure access to the Operator Console service", of https://min.io/docs/minio/kubernetes/upstream/operations/installation.html.
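If editing YAML is inconvenient, a kubectl patch can flip the stuck LoadBalancer service to NodePort. A sketch only: the namespace minio-cluster and service name test-tenant-console are assumptions (tenant console services are typically named after the tenant), and the cluster commands are shown commented out since they need cluster access:

```shell
# Requires cluster access; adjust namespace/service names to yours:
# kubectl -n minio-cluster patch svc test-tenant-console \
#     -p '{"spec": {"type": "NodePort"}}'
# kubectl -n minio-cluster get svc test-tenant-console   # note the allocated nodePort

# The patch payload itself, checked here to be well-formed JSON:
patch='{"spec": {"type": "NodePort"}}'
echo "$patch" | python3 -c 'import json, sys; json.load(sys.stdin); print("ok")'
```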
@cesnietor I have another problem in the logs that might be behind this issue:
kubectl logs -f test-tenant-pool-0-1
Defaulted container "minio" out of: minio, sidecar, validate-arguments (init)
Unable to use the drive https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export0: drive not found
Unable to use the drive https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export1: drive not found
API: SYSTEM.storage
Time: 10:26:50 UTC 07/07/2024
DeploymentID: 52f90ae7-64b7-44cf-baf2-8ef9d8296e50
Error: Write quorum could not be established on pool: 0, set: 0, expected write quorum: 3, drives-online: 2 (*errors.errorString)
maintenance="false"
6: internal/logger/logger.go:268:logger.LogIf()
5: cmd/logging.go:156:cmd.storageLogIf()
4: cmd/erasure-server-pool.go:2610:cmd.(*erasureServerPools).Health()
3: cmd/server-main.go:924:cmd.serverMain.func11()
2: cmd/server-main.go:561:cmd.bootstrapTrace()
1: cmd/server-main.go:923:cmd.serverMain()
Waiting for all MinIO sub-systems to be initialize...
Configured max API requests per node based on available memory: 252
All MinIO sub-systems initialized successfully in 1.935451ms
MinIO Object Storage Server
Copyright: 2015-2024 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2024-07-04T14-25-45Z (go1.22.5 linux/amd64)
API: http://s3.ins4trk.com
WebUI: http://minio-console.ins4trk.com
Docs: https://min.io/docs/minio/linux/index.html
Use `mc admin info` to look for latest server/drive info
Status: 2 Online, 2 Offline.
API: SYSTEM.peers
Time: 10:26:50 UTC 07/07/2024
DeploymentID: 52f90ae7-64b7-44cf-baf2-8ef9d8296e50
Error: Drive: https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export1 returned drive not found (*fmt.wrapError)
endpoint="https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export1"
4: internal/logger/logger.go:258:logger.LogAlwaysIf()
3: cmd/logging.go:65:cmd.peersLogAlwaysIf()
2: cmd/prepare-storage.go:51:cmd.init.func22.1()
1: cmd/erasure-sets.go:227:cmd.(*erasureSets).connectDisks.func2()
API: SYSTEM.peers
Time: 10:26:50 UTC 07/07/2024
DeploymentID: 52f90ae7-64b7-44cf-baf2-8ef9d8296e50
Error: Drive: https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export0 returned drive not found (*fmt.wrapError)
endpoint="https://test-tenant-pool-0-0.test-tenant-hl.minio-cluster.svc.cluster.local:9000/export0"
4: internal/logger/logger.go:258:logger.LogAlwaysIf()
3: cmd/logging.go:65:cmd.peersLogAlwaysIf()
2: cmd/prepare-storage.go:51:cmd.init.func22.1()
1: cmd/erasure-sets.go:227:cmd.(*erasureSets).connectDisks.func2()
Yes, your pods are not online. Check `kubectl -n <tenant namespace> get pods`. You may need to describe the pod that cannot start to find the reason it cannot start.
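The checks above can be sketched as follows (requires cluster access; the minio-cluster namespace and pod name are taken from the log endpoints quoted earlier, so adjust to your tenant):

```shell
kubectl -n minio-cluster get pods                           # any pods not Running/Ready?
kubectl -n minio-cluster describe pod test-tenant-pool-0-0  # Events section shows scheduling or volume-mount failures
kubectl -n minio-cluster get pvc                            # "drive not found" often points at unbound PVCs
```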
Closing: the user didn't reply, and this is a problem with drives not being available rather than a Console issue.
Expected Behavior
I can log in to the MinIO Console with the tenant root username and password.
Current Behavior
Invalid login
Steps to Reproduce (for bugs)
Your Environment
Operating System and version (uname -a): 5.15.0-112-generic #122-Ubuntu SMP Thu May 23 07:48:21 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Tenant manifest