grafana/loki

Like Prometheus, but for logs.
https://grafana.com/loki
GNU Affero General Public License v3.0

Are there any limits on the Docker version for Promtail? I failed to deploy it on Docker 18.3.1, and there are no error logs. #1971

Closed TimeBye closed 4 years ago

TimeBye commented 4 years ago

Describe the bug: Are there any limits on the Docker version for Promtail? I failed to deploy it on Docker version 18.3.1, and there are no error logs at all.

To Reproduce Steps to reproduce the behavior:

  1. Started Loki (SHA or version)
    helm repo add loki https://grafana.github.io/loki/charts
    helm repo update
    helm upgrade --install loki loki/loki --namespace loki
  2. Started Promtail (SHA or version) to tail '...'
    helm upgrade --install promtail loki/promtail --set "loki.serviceName=loki" --namespace loki

Expected behavior: normal operation for Promtail.
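For reference, a quick status check after running the steps above (the pod name below is a placeholder; with this bug the log output shows nothing useful, matching the "no error logs" symptom):

    # List the pods created by the loki and promtail releases.
    kubectl -n loki get pods -o wide
    # Tail the logs of one promtail pod (replace promtail-xxxxx with an actual pod name).
    kubectl -n loki logs promtail-xxxxx --tail=100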

Environment:

Screenshots, Promtail config, or terminal output

crezy8 commented 4 years ago

Same issue for me. I'm just getting started with Promtail; Kubernetes version 1.13.2, Docker version 18.6.1. In my case, all pods go into CrashLoopBackOff status, kubectl describe pod shows the pod was terminated with reason Completed, and the exit code was 0. The same log output, no errors; I just can't figure it out.
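A hedged sketch of pulling that same termination info directly instead of reading the full describe output (pod name is a placeholder):

    # Shows the last termination state (exit code and reason) of the promtail container.
    kubectl -n loki get pod promtail-xxxxx \
      -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'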

I tried changing the pod "command" to sleep 3600, started the pod, and ran /usr/bin/promtail -config.file=/etc/promtail/promtail.yaml -client.url=http://loki:3100/loki/api/v1/push manually. It works: the process starts and communicates with the Loki server.
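A minimal sketch of that workaround as commands, assuming the promtail container is the first (and only) container in the DaemonSet; the pod name is a placeholder:

    # Keep the pods alive by overriding the container command with sleep.
    kubectl -n loki patch daemonset promtail --type=json \
      -p='[{"op":"add","path":"/spec/template/spec/containers/0/command","value":["sleep","3600"]}]'
    # Exec into one of the recreated pods and start promtail by hand with the same flags as above.
    kubectl -n loki exec -it promtail-xxxxx -- \
      /usr/bin/promtail -config.file=/etc/promtail/promtail.yaml -client.url=http://loki:3100/loki/api/v1/push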

For sure, updating the Docker version to 18.9.x would help.

TimeBye commented 4 years ago

I just upgraded Docker to 18.9.6 on all nodes, and promtail works fine.

[root@stagingx ~]# kubectl get po -n loki -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
loki-0           1/1     Running   0          2m18s   10.233.66.156   staging04   <none>           <none>
promtail-7lcmh   1/1     Running   0          2m18s   10.233.71.65    staging07   <none>           <none>
promtail-f74ct   1/1     Running   0          2m27s   10.233.70.198   staging06   <none>           <none>
promtail-jmmpg   1/1     Running   0          2m27s   10.233.69.148   staging05   <none>           <none>
promtail-ltvk5   1/1     Running   0          2m25s   10.233.67.185   staging01   <none>           <none>
promtail-qg4x4   1/1     Running   0          2m22s   10.233.65.161   staging03   <none>           <none>
promtail-rnpfx   1/1     Running   0          2m26s   10.233.64.139   stagingx    <none>           <none>
promtail-wc4mt   1/1     Running   0          2m28s   10.233.68.234   staging02   <none>           <none>
promtail-xgt4z   1/1     Running   0          2m18s   10.233.66.157   staging04   <none>           <none>
vs102000 commented 4 years ago

Same issue here. Changing the DaemonSet command to sleep 99999 | sleep 999999999 | /usr/bin/promtail .... works around the problem.

cyriltovena commented 4 years ago

Hey folks!

This is a known issue with 1.4 and 1.4.1 for some Docker daemons. Please use master until we release 1.5.

Thanks
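A possible way to switch Promtail to a master build while waiting for 1.5; this assumes the chart exposes an image.tag value and that a master tag is published for the Promtail image, neither of which is confirmed in this thread:

    # Re-deploy the promtail release, overriding the image tag (image.tag is an assumed chart value).
    helm upgrade --install promtail loki/promtail \
      --namespace loki \
      --set "loki.serviceName=loki" \
      --set image.tag=master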