I think you should fix the URL:
`http://lgtm-minio:9000`
or `http://lgtm-minio:9000/loki`
Perfect! After trying for 2 hours... the correct way to set the URLs was:
For Loki:
```yaml
loki-stack:
  loki:
    enabled: true
    image:
      repository: grafana/loki
      tag: 2.8.1
    schema_config:
      configs:
        - from: "2024-09-01"
          index:
            period: 24h
            prefix: index_
          object_store: s3
          schema: v11
          store: boltdb-shipper
    storage_config:
      boltdb_shipper:
        active_index_directory: /data/loki/index
        cache_location: /data/loki/cache
        shared_store: s3
      filesystem:
        directory: /data/loki/chunks
      aws:
        s3: http://admin:XXXXX@lgtm-minio.monitoring.svc.cluster.local:9000/loki
        s3forcepathstyle: true
        insecure: true
```
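The crux is the `aws.s3` value, which packs scheme, credentials, endpoint, and bucket into a single URL. A minimal sketch of the shape, with placeholder values only:

```yaml
aws:
  # Shape: <scheme>://<access_key>:<secret_key>@<host>:<port>/<bucket>
  s3: http://ACCESS_KEY:SECRET_KEY@my-minio.example.svc.cluster.local:9000/loki
  s3forcepathstyle: true  # MinIO addresses buckets by URL path, not subdomain
  insecure: true          # plain HTTP inside the cluster
```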
For Tempo:
```yaml
tempo:
  enabled: true
  tempo:
    repository: grafana/tempo
    tag: ""
    server:
      http_listen_port: 3100  # HTTP server listen port
    multitenancy_enabled: false
    usage_report:
      reporting_enabled: true
    compactor:
      compaction:
        block_retention: 24h
    distributor:
      receivers:
        jaeger:
          protocols:
            grpc:
              endpoint: 0.0.0.0:14250
            thrift_binary:
              endpoint: 0.0.0.0:6832
            thrift_compact:
              endpoint: 0.0.0.0:6831
            thrift_http:
              endpoint: 0.0.0.0:14268
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
            http:
              endpoint: 0.0.0.0:4318
    ingester:
      max_block_duration: 5m  # https://community.grafana.com/t/tempo-ram-usage-for-6k-spans-per-hour/63801/13
    storage:
      trace:
        # tempo storage backend
        # refer https://grafana.com/docs/tempo/latest/configuration/
        ## Use minio s3 for example
        backend: s3  # store traces in s3
        s3:
          bucket: tempo  # store traces in this bucket
          endpoint: "lgtm-minio.monitoring.svc.cluster.local:9000"  # api endpoint
          access_key: admin     # optional: access key when using static credentials
          secret_key: XXXXXXXX  # optional: secret key when using static credentials
          insecure: true
    querier: {}
    query_frontend: {}
    overrides:
      per_tenant_override_config: /conf/overrides.yaml
```
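As a quick way to exercise those receivers, an OpenTelemetry Collector can export into the distributor's OTLP gRPC port. A minimal sketch, assuming the chart exposes Tempo through a `lgtm-tempo` Service in the `monitoring` namespace (that Service name is an assumption and may differ in your release):

```yaml
# Hypothetical Collector pipeline; "lgtm-tempo" is an assumed Service name.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  otlp:
    endpoint: lgtm-tempo.monitoring.svc.cluster.local:4317
    tls:
      insecure: true  # matches the plain-HTTP, in-cluster setup above
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```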
For Mimir:
```yaml
mimir:
  enabled: true
  image:
    repository: grafana/mimir
    # Overrides the image tag whose default is the chart appVersion.
    tag: "2.11.0"
  multitenancy_enabled: false
  common:
    storage:
      backend: s3
      s3:
        endpoint: "lgtm-minio.monitoring.svc.cluster.local:9000"
        access_key_id: admin
        secret_access_key: XXXXXX
        insecure: true
        bucket_name: mimir
```
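With Mimir writing to MinIO, metrics still have to be pushed in over the remote-write API. A minimal Prometheus `remote_write` sketch, assuming the chart fronts Mimir with a `lgtm-mimir-nginx` Service (the Service name and port are assumptions; `/api/v1/push` is Mimir's standard ingest path):

```yaml
# Hypothetical remote_write target; adjust the Service name/port to whatever
# this chart actually creates in front of Mimir.
remote_write:
  - url: http://lgtm-mimir-nginx.monitoring.svc.cluster.local:80/api/v1/push
```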
Please document it.
Thanks in advance!
Hi! First of all, excellent work! This is the best minimal, standalone LGTM stack installation; everything works fine, except that Loki is not sending logs to MinIO storage, it's persisting everything locally.
Tempo and Mimir store to MinIO fine, but Loki does not:
Loki itself is working fine, just with local storage, and there are no ERROR logs in the Loki pod:
```
level=info ts=2024-09-06T20:33:25.112019053Z caller=lifecycler.go:576 msg="instance not found in ring, adding with no tokens" ring=ingester
level=info ts=2024-09-06T20:33:25.112354453Z caller=lifecycler.go:416 msg="auto-joining cluster after timeout" ring=ingester
level=info ts=2024-09-06T20:33:25.11265887Z caller=wal.go:156 msg=started component=wal
ts=2024-09-06T20:33:25.116200049Z caller=memberlist_logger.go:74 level=warn msg="Failed to resolve lgtm-loki-memberlist: lookup lgtm-loki-memberlist on 10.43.0.10:53: no such host"
level=warn ts=2024-09-06T20:33:25.116253406Z caller=memberlist_client.go:598 msg="joining memberlist cluster: failed to reach any nodes" retries=0 err="1 error occurred:\n\t Failed to resolve lgtm-loki-memberlist: lookup lgtm-loki-memberlist on 10.43.0.10:53: no such host\n\n"
level=info ts=2024-09-06T20:33:26.11212947Z caller=scheduler.go:630 msg="waiting until scheduler is ACTIVE in the ring"
level=info ts=2024-09-06T20:33:26.112164319Z caller=compactor.go:346 msg="waiting until compactor is ACTIVE in the ring"
level=info ts=2024-09-06T20:33:26.251353423Z caller=scheduler.go:634 msg="scheduler is ACTIVE in the ring"
level=info ts=2024-09-06T20:33:26.25155165Z caller=module_service.go:82 msg=initialising module=querier
level=info ts=2024-09-06T20:33:26.251605731Z caller=module_service.go:82 msg=initialising module=query-frontend
level=info ts=2024-09-06T20:33:26.303493614Z caller=compactor.go:350 msg="compactor is ACTIVE in the ring"
level=info ts=2024-09-06T20:33:26.303581034Z caller=loki.go:499 msg="Loki started"
ts=2024-09-06T20:33:27.038720633Z caller=memberlist_logger.go:74 level=warn msg="Failed to resolve lgtm-loki-memberlist: lookup lgtm-loki-memberlist on 10.43.0.10:53: no such host"
level=warn ts=2024-09-06T20:33:27.038776101Z caller=memberlist_client.go:598 msg="joining memberlist cluster: failed to reach any nodes" retries=1 err="1 error occurred:\n\t Failed to resolve lgtm-loki-memberlist: lookup lgtm-loki-memberlist on 10.43.0.10:53: no such host\n\n"
level=info ts=2024-09-06T20:33:29.25210275Z caller=scheduler.go:681 msg="this scheduler is in the ReplicationSet, will now accept requests."
level=info ts=2024-09-06T20:33:29.252147818Z caller=worker.go:209 msg="adding connection" addr=10.42.1.70:9095
level=info ts=2024-09-06T20:33:30.518759777Z caller=memberlist_client.go:595 msg="joining memberlist cluster succeeded" reached_nodes=1 elapsed_time=5.410065132s
level=info ts=2024-09-06T20:33:31.304598636Z caller=compactor.go:411 msg="this instance has been chosen to run the compactor, starting compactor"
level=info ts=2024-09-06T20:33:31.304700456Z caller=compactor.go:440 msg="waiting 10m0s for ring to stay stable and previous compactions to finish before starting compactor"
level=info ts=2024-09-06T20:33:36.251903524Z caller=frontend_scheduler_worker.go:107 msg="adding connection to scheduler" addr=10.42.1.70:9095
level=info ts=2024-09-06T20:34:25.087614709Z caller=table_manager.go:166 msg="handing over indexes to shipper"
level=info ts=2024-09-06T20:34:25.087604732Z caller=table_manager.go:134 msg="uploading tables"
```
I'm using the lgtm-minimal chart, version 2.10, with this config:
```yaml
# values.yaml for LGTM Helm Chart

# Grafana configuration
grafana:
  enabled: true
  image:
    repository: grafana/grafana
    tag: ""
  # Administrator credentials when not using an existing secret (see below)
  adminUser: admin
  adminPassword: admin
  grafana.ini:
    dataproxy:
      max_idle_connections: 500
  sidecar:
    datasources:
      enabled: true
    dashboards:
      enabled: true
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
        # url will be interpreted as query for the datasource
        # tags: ['namespace', 'pod']

# Loki configuration
loki-stack:
  loki:
    enabled: true
    image:
      repository: grafana/loki
      tag: 2.8.1
    storage_config:
      boltdb_shipper:
        active_index_directory: /loki/index
        cache_location: /loki/index_cache
        shared_store: s3
      aws:
        s3: http://admin:supersecret@http://lgtm-minio
        s3forcepathstyle: true
  promtail:
    enabled: false
  fluent-bit:
    enabled: false
  grafana:
    enabled: false
  prometheus:
    enabled: false
  filebeat:
    enabled: false
  logstash:
    enabled: false
    image: grafana/logstash-output-loki
    imageTag: 1.0.1

# Tempo configuration
tempo:
  enabled: true
  tempo:
    repository: grafana/tempo
    tag: ""
    server:
      http_listen_port: 3100  # HTTP server listen port
    storage:
      trace:
        # tempo storage backend

# Mimir configuration
mimir:
  enabled: true
  image:
    repository: grafana/mimir
    # Overrides the image tag whose default is the chart appVersion.
  multitenancy_enabled: false
  common:
    storage:
      backend: s3
      s3:
        endpoint: "lgtm-minio:9000"
        access_key_id: admin
        secret_access_key: supersecret
        insecure: true
        bucket_name: mimir

# Minio configuration
minio:
  enabled: true
  image:
    registry: docker.io
    repository: bitnami/minio
    tag: 2024.3.15-debian-12-r0
  mode: standalone
  auth:
    rootUser: admin
    rootPassword: supersecret
  defaultBuckets: "loki, mimir, tempo"
  ingress:
    enabled: true
    hostname: "XXXXXXXXXXX"
    path: /
    tls: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      traefik.ingress.kubernetes.io/router.entrypoints: "websecure"
      traefik.ingress.kubernetes.io/router.tls: "true"
```
Any suggestions or ideas?
Thanks in advance!
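For readers comparing this against the working values at the top of the thread: the suspect line is Loki's `aws.s3` URL, which embeds `http://` twice, so the endpoint can never parse. A sketch of the broken versus working shape (the port and bucket path follow the resolution above; treat them as placeholders for your own setup):

```yaml
aws:
  # Broken: a second scheme appears inside the authority part of the URL
  # s3: http://admin:supersecret@http://lgtm-minio
  # Working: a single scheme, then host[:port] and an optional /bucket path
  s3: http://admin:supersecret@lgtm-minio:9000/loki
  s3forcepathstyle: true
```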