ghost opened 1 year ago
@aoxt323 I'm experiencing the same issue. Did you solve it?
@rehnerik I was in a hurry, so I recreated the environment and the same problem no longer occurred.
+1
I was able to fix the same issue by limiting the batch size for the OpenTelemetry collector. I hit this with a tenant that had a high log volume and correspondingly large batch sizes, while the other tenants were handled just fine.
@L-Henke can you be more specific on what you changed? I started experiencing this as well.
I reduced the send_batch_size of the OpenTelemetry Collector batch processor from the default 8192 to something like 1000. I also set send_batch_max_size to the same value.
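For reference, a minimal sketch of what that batch processor section looks like (1000 is just the value that worked here; tune it to your log volume):

``` yaml
processors:
  batch:
    # default is 8192; lowering it keeps each push to Loki small
    send_batch_size: 1000
    # also cap the upper bound so a backlog can't produce oversized batches
    send_batch_max_size: 1000
```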
We are seeing intermittent 503s returned by the gateway and are unable to track them down. We did turn down the batch size on the collector, which seems to have helped, but we still get random bursts of 503s for 2-3 minutes at a time. There isn't much feedback in the internal Loki logs to explain why the 503 is being returned.
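In case it helps anyone else digging into this, raising Loki's log verbosity can surface more detail about failing requests. A minimal sketch of the relevant server block (assuming the single-binary config layout used later in this thread):

``` yaml
server:
  http_listen_port: 3100
  # log per-request errors so the cause of the 503 shows up in the output
  log_level: debug
```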
Hi Team, I am having trouble with Loki.
I want to run Loki with docker-compose on Ubuntu 22.04. Everything is fine until Loki starts up, but when it receives logs from the OpenTelemetry collector, Loki returns a 503 error. I would like to know why the 503 occurs and how to fix it.
docker-compose.yaml
``` yaml
version: '3'
services:
  loki:
    image: grafana/loki
    container_name: loki
    restart: always
    volumes:
      - ./loki/local-config.yaml:/etc/loki/local-config.yaml
    command:
      - "-config.file=/etc/loki/local-config.yaml"
    ports:
      - 3100:3100
    networks:
      - my-network
  otelcollector:
    image: otel/opentelemetry-collector-contrib
    container_name: otelcollector
    restart: always
    ports:
      - 4317:4317 # gRPC port
      - 8887:8887 # HTTP port
      - 9200:9200
    volumes:
      - ./otelcollector:/etc/otelcollector
    command:
      - "--config=/etc/otelcollector/otel-collector-config.yaml"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
```
loki/local-config.yaml
``` yaml
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  instance_addr: 127.0.0.1
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
#
# Statistics help us better understand how Loki is used, and they show us performance
# levels for most users. This helps us prioritize features and documentation.
# For more information on what's sent, look at
# https://github.com/grafana/loki/blob/main/pkg/usagestats/stats.go
# Refer to the buildReport method to see what goes into a report.
#
# If you would like to disable reporting, uncomment the following lines:
#analytics:
#  reporting_enabled: false
```
otelcollector/otel-collector-config.yaml
``` yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: otelcollector:4317
      http:
        endpoint: otelcollector:8887

exporters:
  loki:
    endpoint: "http://loki:3100/loki/api/v1/push"

processors:
  batch:

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [loki]
      processors: [batch]
```
Logs