SumoLogic / sumologic-kubernetes-collection

Sumo Logic collection solution for Kubernetes
Apache License 2.0

Silently dropped logs with out-of-the-box config #1255

Closed: laupow closed this issue 3 years ago

laupow commented 3 years ago

Describe the bug

Higher-volume log environments need better configuration guardrails to ensure logs aren't dropped silently.

Recently, two different engineers expected to find logs in our production environment and found none.

One instance was a long-running service with intermittent missing messages (screenshot attached). Another instance was a new Deployment whose logs were not captured at all (the logs themselves were verified with kubectl logs <pod>; screenshot in ticket).

[Screenshot: MissingLogsInSumo]

Logs

Logs available in ticket

Command used to install/upgrade Collection

helm upgrade -i -f sumologic-collector/base-eks-values.yaml \
  -f sumologic-collector/${ENVIRONMENT}-values.yaml \
  --namespace $NAMESPACE \
  --kube-context ${KUBECTL_CONTEXT} \
  $RELEASE_NAME \
  --version v1.3.1 \
  --set sumologic.accessKey=$SUMOLOGIC_ACCESS_KEY \
  sumologic/sumologic 

with Helm 2

Configuration

fluentd:
  logs:
    autoscaling:
      enabled: true
    containers:
      sourceCategory: '%{pod_name}'
      sourceCategoryPrefix: production/
      sourceCategoryReplaceDash: '-'
      sourceName: '%{namespace}.%{pod}.%{container}'
    default:
      sourceCategoryPrefix: production/
    kubelet:
      sourceCategoryPrefix: production/
    statefulset:
      nodeSelector:
        company.com/nodegroup-name: general-public
      tolerations:
      - effect: NoSchedule
        key: dedicated
        operator: Equal
        value: general-public
    systemd:
      sourceCategoryPrefix: production/
  persistence:
    enabled: true
prometheus-operator:
  enabled: false
sumologic:
  accessId: <removed>
  accessKey: <removed>
  clusterName: us-east-1-eks-production
  metrics:
    enabled: false

To Reproduce

I have not been able to reproduce the issue. On Dec 15 we manually raised the HPA minimum from 3 to 7, and nobody has reported issues since then, but 🤷

The issue occurs in our production environment so there is somewhat of a disincentive to reproduce it :)
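For reference, the manual HPA change corresponds roughly to the values sketch below (this assumes the chart exposes minReplicas under fluentd.logs.autoscaling; exact keys may differ by chart version):

fluentd:
  logs:
    autoscaling:
      enabled: true
      # Raise the floor so the aggregator tier keeps headroom (we went from 3 to 7)
      minReplicas: 7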

Expected behavior

Provide a clear signal (pod crash, log message) when there is a capacity issue or another condition that might cause logs to be dropped.
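For example, a capacity alert along these lines would be a clear enough signal. This is only a sketch: it assumes fluentd's prometheus plugin metrics (e.g. fluentd_output_status_buffer_available_space_ratio) are actually scraped, which they are not in our values since prometheus-operator is disabled:

# Illustrative Prometheus alerting rule on fluentd output buffer headroom.
groups:
  - name: fluentd-capacity
    rules:
      - alert: FluentdBufferAlmostFull
        expr: fluentd_output_status_buffer_available_space_ratio < 10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: fluentd output buffer is almost full; logs may be dropped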

Environment (please complete the following information):

Anything else do we need to know

fluentd pod metrics, HPA minimum adjusted Dec 15

[Screenshot: FluendPodMetrics7days]

Sumo collector volume

[Screenshot: 2020-12-17 at 3:41:06 PM]
laupow commented 3 years ago

Actually, a bit of reproducibility. Scaling fluentd down to one pod triggered these logs in staging.

[Screenshot: 2020-12-17 at 5:33:15 PM]

This makes sense. I think what I'm really asking for is how to guarantee we don't drop logs with the fluent-bit/fluentd aggregator architecture. I'm not 100% convinced the HPA guarantees no missing messages.
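In case it helps anyone else triage, the quickest check I know of is to grep the fluentd aggregator pods for buffer-pressure warnings. The namespace and label selector below are assumptions for our install; adjust them to your release name:

# Sketch: search recent fluentd aggregator logs for buffer-pressure warnings.
# Replace RELEASE and the namespace with the values from your install.
kubectl logs -n sumologic -l app=RELEASE-sumologic-fluentd-logs --tail=10000 \
  | grep -Ei "buffer overflow|failed to flush|slow_flush"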

perk-sumo commented 3 years ago

Hi @laupow - thank you for reporting! We are taking a look at this.

sumo-drosiek commented 3 years ago

@laupow Sorry for the late response.

By using a graceful shutdown period together with liveness and readiness probes, we ensure that logs arrive without gaps.

We observed that fluent-bit versions below 1.6.10 were losing logs due to invalid log rotation handling. We recommend using the latest version of our collection, which uses a fixed image.

In addition, we are going to improve load balancing and HPA behavior by disabling keepalive for fluent-bit (#1495).
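If you want to experiment before upgrading, the change referenced in #1495 amounts to turning off keepalive on fluent-bit's forward output, roughly as in the sketch below. This is illustrative only: the host, port, and the values key used to override the fluent-bit config all depend on the chart version.

# Illustrative [OUTPUT] stanza: with keepalive off, each flush opens a new
# connection, so traffic rebalances onto fluentd replicas added by the HPA.
[OUTPUT]
    Name          forward
    Match         *
    Host          ${FLUENTD_LOGS_SVC}.${NAMESPACE}.svc.cluster.local.
    Port          24321
    net.keepalive off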

laupow commented 3 years ago

Awesome, thanks for the update. Looking forward to v2.1 👍

sumo-drosiek commented 3 years ago

@laupow 2.1.0 is released :tada: Please check how it works for you :)

perk-sumo commented 3 years ago

Hi @laupow let me close this issue. Please let me know if the problem still exists.