kunalmehta-eve opened 1 month ago
@slim-bean can you please suggest a fix? Please provide the necessary configuration to resolve this issue.
```yaml
limits_config:
  max_line_size: 0
```
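For context: Loki's `limits_config.max_line_size` defaults to 256KB (262144 bytes), which matches the rejected-entry size in the error below, and setting it to `0` disables the check entirely. A sketch of an alternative, assuming a Loki version that supports `max_line_size_truncate`, if you would rather cap lines than remove the limit:

```yaml
limits_config:
  # Allow lines up to 1MB instead of disabling the limit entirely
  max_line_size: 1MB
  # Truncate oversized lines instead of rejecting the whole entry
  max_line_size_truncate: true
```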
We are seeing the following error in the Loki logs:
```
level=error ts=2024-05-16T08:35:43.267554629Z caller=manager.go:49 component=distributor path=write msg="write operation failed" details="Max entry size '262144' bytes exceeded for stream '{app=\"generate-preview-5jp7j\", container=\"main\", filename=\"/var/log/pods/argo-workflows_generate-preview-5jp7j_31f70b67-5db3-4933-ab9b-0d4513b46316/main/0.log\", job=\"argo-workflows/generate-preview-5jp7j\", namespace=\"argo-workflows\", node_name=\"aks-defaultgreen-11165910-vmss00008s\", pod=\"generate-preview-5jp7j\", stream=\"stderr\"}' while adding an entry with length '583758' bytes" org_id=fake
```
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
Log entries larger than 262144 bytes are ingested without being rejected. Please provide the necessary configuration to achieve this.
**Environment:**
**Screenshots, Promtail config, or terminal output**

```yaml
loki:
  auth_enabled: false
  analytics:
    reporting_enabled: false
  storage:
    type: azure
    azure:
      accountName: ${loki_azurerm_storage_account_name}
    bucketNames:
      chunks: ${loki_chunks_azurerm_storage_container_name}
      ruler: ${loki_ruler_azurerm_storage_container_name}
      admin: ${loki_admin_azurerm_storage_container_name}
  ingester:
    max_chunk_age: 24h
  ingester_client:
    grpc_client_config:
      grpc_keepalive_time: 30s    # Adjust keepalive settings
      grpc_keepalive_timeout: 20s # Adjust keepalive settings
  structuredConfig:
    query_range:
      parallelise_shardable_queries: false
    server:
      http_server_write_timeout: 10m
  limits_config:
    allow_structured_metadata: false
    max_concurrent_tail_requests: 100
    discover_service_name: []
  schemaConfig:
    configs:
lokiCanary:
  tolerations:
monitoring:
  enabled: true
  selfMonitoring:
    enabled: true
    grafanaAgent:
      installOperator: true
      tolerations:
write:
  nodeSelector:
    stack: monitoring
  tolerations:
read:
  nodeSelector:
    stack: monitoring
  tolerations:
backend:
  nodeSelector:
    stack: monitoring
  tolerations:
chunksCache:
  nodeSelector:
    stack: monitoring
  tolerations:
resultsCache:
  nodeSelector:
    stack: monitoring
  tolerations:
```
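In the Helm values above, the override would sit under `loki.limits_config`, alongside the limits already set. A minimal sketch, assuming the `grafana/loki` chart layout used in this issue (`0` disables the per-line size check, whose default is 256KB):

```yaml
loki:
  limits_config:
    allow_structured_metadata: false
    max_concurrent_tail_requests: 100
    discover_service_name: []
    # 0 disables the per-line size limit (default: 256KB / 262144 bytes)
    max_line_size: 0
```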