Closed 0hlov3 closed 2 years ago
You just have to set `PROMETHEUS_NAMESPACE` as an environment variable on the minio-operator deployment.

Sample:

```yaml
env:
  - name: PROMETHEUS_NAMESPACE
    value: "monitoring"
```
It's documented here: https://github.com/minio/operator/blob/6cf1612e9b64a6b400394ac5f01353953d58fa37/UPGRADE.md#v439---v440
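If you install the operator with its Helm chart, the same variable can be injected through the chart values instead of editing the deployment by hand. A sketch, assuming the chart exposes an `operator.env` list (check the values.yaml of your chart version):

```yaml
# values.yaml fragment for the minio-operator chart (key names assumed)
operator:
  env:
    - name: PROMETHEUS_NAMESPACE
      value: "monitoring"
```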
Thank you very much, I must have searched for 4 hours and just couldn't find it or overlooked it.
@cyril-corbon Hi! Thanks for your advice, it is very important and useful. But I'd like to see an explicit option to select the Prometheus namespace in the Helm chart of minio-operator, because the solution is really hard to find otherwise. Thank goodness for this nice issue tracker!!!
```
error syncing 'minio-tenant/minio': No prometheus found on namespace victoria-metrics
```
Users can build a monitoring system not only on the Prometheus Operator; they can also use VictoriaMetrics or the Grafana stack. Anyway, it doesn't look like a big deal: it is possible to generate the config manually. Fortunately, the operator source code is nice and easy to understand, so I found out how it generates that configuration in about 5 minutes. The hotfix looks like this:
```yaml
- job_name: 'minio'
  scheme: https
  metrics_path: "/minio/v2/metrics/cluster"
  bearer_token: "your_token"
  tls_config:
    insecure_skip_verify: true
  static_configs:
    - targets: ["minio-hl.minio-tenant.svc.cluster.local:9000"]
```
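As a cross-check, the job above matches what the MinIO client can emit for you. Assuming an `mc` alias named `myminio` is configured for the tenant (the alias name is an assumption here), the following prints a ready-made `scrape_configs` entry including the bearer token, so you don't have to extract the token by hand:

```shell
# Prints a Prometheus scrape job (job_name, bearer_token, metrics_path, targets)
# for the cluster metrics endpoint of the aliased deployment.
mc admin prometheus generate myminio
```

The output can be pasted into prometheus.yml or translated into whatever scrape format VictoriaMetrics or another stack expects.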
We installed the MinIO Operator and the MinIO Tenant with the Helm charts, and configured our chart to use `prometheusOperator`. Everything except the Prometheus scrape config seems to work as expected; the Operator logs contain messages like the `No prometheus found` error quoted above.
## Expected Behavior

As described in crd.adoc, `prometheusOperator` directs the MinIO Operator to use the Prometheus Operator: "Tenant scrape configuration will be added to prometheus managed by the prometheus-operator."

As the Prometheus Operator is located in the `monitoring` namespace, it seems that the MinIO Operator does not find it.
## Current Behavior

The MinIO Operator generates a Prometheus by itself and writes log messages like the one quoted above.
## Possible Solution
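Until the chart exposes an explicit option, a minimal sketch of a workaround, assuming the operator runs as Deployment `minio-operator` in namespace `minio-operator` (both names are assumptions; adjust them to your install):

```shell
# Point the operator at the namespace that actually contains the Prometheus CR,
# so its lookup no longer fails with "No prometheus found".
kubectl set env deployment/minio-operator -n minio-operator PROMETHEUS_NAMESPACE=monitoring
```

This triggers a rollout of the operator pod with the new variable set.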
## Steps to Reproduce (for bugs)

```yaml
prometheusOperator: true
```
## Context

## Regression

## Your Environment
* minio-operator: v4.4.28
* `uname -a`:

MinIO Tenant Definition:
```yaml
tenant:
  # Tenant name
  name: minio1
  # Registry location and Tag to download MinIO Server image
  image:
    repository: quay.io/minio/minio
    tag: RELEASE.2022-07-24T17-09-31Z
    pullPolicy: IfNotPresent
  # Customize any private registry image pull secret.
  # currently only one secret registry is supported
  imagePullSecret: { }
  # If a scheduler is specified here, Tenant pods will be dispatched by specified scheduler.
  # If not specified, the Tenant pods will be dispatched by default scheduler.
  scheduler: { }
  # Secret name that contains additional environment variable configurations.
  # The secret is expected to have a key named config.env containing environment variables exports.
  configuration:
    name: minio1-env-configuration
  # Specification for MinIO Pool(s) in this Tenant.
  pools:
    # Servers specifies the number of MinIO Tenant Pods / Servers in this pool.
  # Mount path where PV will be mounted inside container(s).
  mountPath: /export
  # Sub path inside Mount path where MinIO stores data.
  subPath: /data
  # pool metrics to be read by Prometheus
  metrics:
    enabled: false
    port: 9000
    protocol: http
  certificate:
    # Use this field to provide one or more external CA certificates. This is used by MinIO
  # MinIO features to enable or disable in the MinIO Tenant
  # https://github.com/minio/operator/blob/master/docs/crd.adoc#features
  features:
    bucketDNS: true
    domains: { }
  # List of bucket names to create during tenant provisioning
  buckets: [ ]
  # List of secret names to use for generating MinIO users during tenant provisioning
  users: [ ]
  # PodManagement policy for MinIO Tenant Pods. Can be "OrderedReady" or "Parallel"
  # Refer https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
  # for details.
  podManagementPolicy: Parallel
  # Liveness Probe for container liveness. Container will be restarted if the probe fails.
  # Refer https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.
  liveness: { }
  # Readiness Probe for container readiness. Container will be removed from service endpoints if the probe fails.
  # Refer https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
  readiness: { }
  # exposeServices defines the exposure of the MinIO object storage and Console services.
  # service is exposed as a loadbalancer in k8s service.
  exposeServices:
    minio: true
    console: true
  # kubernetes service account associated with a specific tenant
  # https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  serviceAccountName: ""
  # Tenant scrape configuration will be added to prometheus managed by the prometheus-operator.
  prometheusOperator: false
  # Enable JSON, Anonymous logging for MinIO tenants.
  # Refer https://github.com/minio/operator/blob/master/pkg/apis/minio.min.io/v2/types.go#L303
  # How logs will look:
  #   $ k logs minio1-pool-0-0 -n default
  #   {"level":"INFO","errKind":"","time":"2022-04-07T21:49:33.740058549Z","message":"All MinIO sub-systems initialized successfully"}
  # Notice they are in JSON format to be consumed
  logging:
    anonymous: true
    json: true
    quiet: true
  # serviceMetadata allows passing additional labels and annotations to MinIO and Console specific
  # services created by the operator.
  serviceMetadata: { }
  # Add environment variables to be set in MinIO container (https://github.com/minio/minio/tree/master/docs/config)
  env:
  # PriorityClassName indicates the Pod priority and hence importance of a Pod relative to other Pods.
  # This is applied to MinIO pods only.
  # Refer Kubernetes documentation for details https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass/
  priorityClassName: ""
  # Define configuration for KES (stateless and distributed key-management system)
  # Refer https://github.com/minio/kes
  kes:
    image: "" # minio/kes:v0.18.0
    env: [ ]
    replicas: 2
    configuration: |-
      address: :7373
      root: _ # Effectively disabled since no root identity necessary.
      tls:
        key: /tmp/kes/server.key # Path to the TLS private key
        cert: /tmp/kes/server.crt # Path to the TLS certificate
      proxy:
        identities: []
        header:
          cert: X-Tls-Client-Cert
      policy:
        my-policy:
          paths:
            - /v1/key/create/*
            - /v1/key/generate/*
            - /v1/key/decrypt/*
          identities:
            - ${MINIO_KES_IDENTITY}
      cache:
        expiry:
          any: 5m0s
          unused: 20s
      log:
        error: on
        audit: off
      keys:
        # KES configured with fs (File System mode) doesn't work in Kubernetes environments and it's not recommended;
        # use a real KMS
        fs:
          path: "./keys" # Path to directory. Keys will be stored as files. Not Recommended for Production.
        vault:
          endpoint: "http://vault.default.svc.cluster.local:8200" # The Vault endpoint
          namespace: "" # An optional Vault namespace. See: https://www.vaultproject.io/docs/enterprise/namespaces/index.html
          prefix: "my-minio" # An optional K/V prefix. The server will store keys under this prefix.
          approle: # AppRole credentials. See: https://www.vaultproject.io/docs/auth/approle.html
            id: "" # Your AppRole Role ID
            secret: "" # Your AppRole Secret ID
            retry: 15s # Duration until the server tries to re-authenticate after connection loss.
          tls: # The Vault client TLS configuration for mTLS authentication and certificate verification
            key: "" # Path to the TLS client private key for mTLS authentication to Vault
            cert: "" # Path to the TLS client certificate for mTLS authentication to Vault
            ca: "" # Path to one or multiple PEM root CA certificates
          status: # Vault status configuration. The server will periodically reach out to Vault to check its status.
            ping: 10s # Duration until the server checks Vault's status again.
        aws:
          # The AWS SecretsManager key store. The server will store
          # secret keys at the AWS SecretsManager encrypted with
          # AWS-KMS. See: https://aws.amazon.com/secrets-manager
          secretsmanager:
            endpoint: "" # The AWS SecretsManager endpoint - e.g.: secretsmanager.us-east-2.amazonaws.com
            region: "" # The AWS region of the SecretsManager - e.g.: us-east-2
            kmskey: "" # The AWS-KMS key ID used to en/decrypt secrets at the SecretsManager. By default (if not set) the default AWS-KMS key will be used.
            credentials: # The AWS credentials for accessing secrets at the AWS SecretsManager.
              accesskey: "" # Your AWS Access Key
              secretkey: "" # Your AWS Secret Key
              token: "" # Your AWS session token (usually optional)
    imagePullPolicy: "IfNotPresent"
    externalCertSecret: null
    clientCertSecret: null
    # Key name to be created on the KMS, default is "my-minio-key"
    keyName: ""
    resources: { }
    nodeSelector: { }
    affinity:
      nodeAffinity: { }
      podAffinity: { }
      podAntiAffinity: { }
    tolerations: [ ]
    annotations: { }
    labels: { }
    serviceAccountName: ""
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
      runAsNonRoot: true
      fsGroup: 1000
  # Prometheus setup for MinIO Tenant.
  prometheus:
    image: "" # defaults to quay.io/prometheus/prometheus:latest
    env: [ ]
    sidecarimage: "" # defaults to alpine
    initimage: "" # defaults to busybox:1.33.1
    diskCapacityGB: 1
    storageClassName: standard
    annotations: { }
    labels: { }
    nodeSelector: { }
    affinity:
      nodeAffinity: { }
      podAffinity: { }
      podAntiAffinity: { }
    resources: { }
    serviceAccountName: ""
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
      runAsNonRoot: true
      fsGroup: 1000
  # LogSearch API setup for MinIO Tenant.
  log:
    image: "" # defaults to minio/operator:v4.4.17
    env: [ ]
    resources: { }
    nodeSelector: { }
    affinity:
      nodeAffinity: { }
      podAffinity: { }
      podAntiAffinity: { }
    tolerations: [ ]
    annotations: { }
    labels: { }
    audit:
      diskCapacityGB: 1
    # Postgres setup for LogSearch API
    db:
      image: "" # defaults to library/postgres
      env: [ ]
      initimage: "" # defaults to busybox:1.33.1
      volumeClaimTemplate:
        metadata: { }
        spec:
          storageClassName: standard
          accessModes:

ingress:
  api:
    enabled: false
    ingressClassName: ""
    labels: { }
    annotations: { }
    tls: [ ]
    host: minio.local
    path: /
    pathType: Prefix
  console:
    enabled: false
    ingressClassName: ""
    labels: { }
    annotations: { }
    tls: [ ]
    host: minio-console.local
    path: /
    pathType: Prefix
```
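For stacks where the operator's Prometheus discovery can't help at all (VictoriaMetrics or other non-prometheus-operator setups, as discussed above), a hand-written ServiceMonitor against the tenant's headless service is another possible sketch. The metadata name, service port name, and bearer-token Secret here are assumptions; the token is the same one used in the manual scrape config above:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: minio-cluster-metrics        # assumed name
  namespace: minio-tenant
spec:
  selector:
    matchLabels:
      v1.min.io/tenant: minio        # assumed label on the tenant services
  endpoints:
    - port: http-minio               # assumed service port name; check the minio-hl Service
      scheme: https
      path: /minio/v2/metrics/cluster
      tlsConfig:
        insecureSkipVerify: true
      bearerTokenSecret:             # assumed Secret holding the metrics JWT
        name: minio-prometheus-token
        key: token
```

Operators that understand the prometheus-operator CRDs (including VictoriaMetrics' converter) can then pick the target up without any `PROMETHEUS_NAMESPACE` involvement.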