fabric8io / gofabric8

CLI used when working with fabric8 running on Kubernetes or OpenShift
https://fabric8.io/
Apache License 2.0

fluentd Crashing #281

Open · antifragileer opened this issue 8 years ago

antifragileer commented 8 years ago

I deployed the latest fabric8 release. Things are much more stable.

That said, fluentd is still crashing after installing the management components on GKE. Also, there are now two fluentd pods instead of the single one I saw before (presumably one per node, since fluentd runs as a DaemonSet).

kubectl get pods
NAME                                        READY     STATUS             RESTARTS   AGE
configmapcontroller-2401828919-krwbe        1/1       Running            0          3h
elasticsearch-426176676-ojwdr               2/2       Running            0          3m
exposecontroller-1126175316-pcf1e           1/1       Running            0          3h
fabric8-1449330595-ixfeo                    2/2       Running            0          3h
fabric8-docker-registry-4185129253-3lgzw    1/1       Running            0          3h
fabric8-forge-1425626965-2hso7              1/1       Running            0          3h
fluentd-iwzbd                               0/1       CrashLoopBackOff   5          3m
fluentd-spk75                               0/1       CrashLoopBackOff   5          3m
gogs-3048373565-6o62k                       1/1       Running            0          3h
grafana-225606179-t58x0                     1/1       Running            0          3m
jenkins-2297698455-8550g                    1/1       Running            0          3h
kibana-3678948785-wqqei                     2/2       Running            0          3m
message-broker-1045034239-demg6             1/1       Running            0          10m
message-gateway-474760680-clygh             1/1       Running            0          10m
nexus-4216203902-yyam6                      1/1       Running            0          3h
node-exporter-0yxge                         1/1       Running            0          3m
node-exporter-8rgzj                         1/1       Running            0          3m
prometheus-3924772489-dkkk4                 2/2       Running            0          3m
prometheus-blackbox-expo-1529649959-du92x   1/1       Running            0          3m
zookeeper-3695684073-snpie                  1/1       Running            0          10m

Here is the describe output for the first fluentd pod:

kubectl describe pods fluentd-iwzbd
Name:           fluentd-iwzbd
Namespace:      forge-paas-ns
Node:           gke-forge-paas-default-pool-2f6c1994-iu1d/10.128.0.2
Start Time:     Mon, 07 Nov 2016 15:24:24 -0700
Labels:         group=io.fabric8.devops.apps
                project=fluentd
                provider=fabric8
                version=2.2.296
Status:         Running
IP:             10.0.1.11
Controllers:    DaemonSet/fluentd
Containers:
  fluentd:
    Container ID:       docker://05a70fc5fa2ad70a1f707b821cb6d41e70dda48bc647521efac62017deb2721f
    Image:              fabric8/fluentd-kubernetes:v1.19
    Image ID:           docker://sha256:c4d9030b93687aef0c1b5ed635ce7b009c50e77692769819af3d7367cf8cc05c
    Port:               24231/TCP
    Limits:
      cpu:      100m
    Requests:
      cpu:              100m
    State:              Waiting
      Reason:           CrashLoopBackOff
    Last State:         Terminated
      Reason:           ContainerCannotRun
      Exit Code:        128
      Started:          Mon, 07 Nov 2016 15:30:09 -0700
      Finished:         Mon, 07 Nov 2016 15:30:09 -0700
    Ready:              False
    Restart Count:      6
    Volume Mounts:
      /mnt/ephemeral/docker/containers from awsdocker (ro)
      /mnt/sda1/var/lib/docker/containers from minikubedocker (ro)
      /var/lib/docker/containers from defaultdocker (ro)
      /var/log from varlog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from fluentd-token-qaddh (ro)
    Environment Variables:
      ELASTICSEARCH_HOST:       elasticsearch
      ELASTICSEARCH_PORT:       9200
Conditions:
  Type          Status
  Initialized   True 
  Ready         False 
  PodScheduled  True 
Volumes:
  varlog:
    Type:       HostPath (bare host directory volume)
    Path:       /var/log
  defaultdocker:
    Type:       HostPath (bare host directory volume)
    Path:       /var/lib/docker/containers
  awsdocker:
    Type:       HostPath (bare host directory volume)
    Path:       /mnt/ephemeral/docker/containers
  minikubedocker:
    Type:       HostPath (bare host directory volume)
    Path:       /mnt/sda1/var/lib/docker/containers
  fluentd-token-qaddh:
    Type:       Secret (a volume populated by a Secret)
    SecretName: fluentd-token-qaddh
QoS Class:      Burstable
Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                                                    SubobjectPath                   Type            Reason          Message
  ---------     --------        -----   ----                                                    -------------                   --------        ------          -------
  7m            7m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Normal          Pulling         pulling image "fabric8/fluentd-kubernetes:v1.19"
  6m            6m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Normal          Pulled          Successfully pulled image "fabric8/fluentd-kubernetes:v1.19"
  6m            6m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Normal          Created         Created container with docker id 4a82f3ac063b; Security:[seccomp=unconfined]
  6m            6m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Warning         Failed          Failed to start container with docker id 4a82f3ac063b with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  6m            6m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Normal          Created         Created container with docker id 25527ff66e48; Security:[seccomp=unconfined]
  6m            6m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Warning         Failed          Failed to start container with docker id 25527ff66e48 with error: Error response from daemon: mkdir /mnt/sda1: read-only file system
  6m            6m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}                                     Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with RunContainerError: "runContainer: Error response from daemon: mkdir /mnt/sda1: read-only file system"

  6m    6m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Warning Failed          Failed to start container with docker id 9d0e71280a08 with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  6m    6m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Normal  Created         Created container with docker id 9d0e71280a08; Security:[seccomp=unconfined]
  6m    6m      2       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}                                     Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with CrashLoopBackOff: "Back-off 20s restarting failed container=fluentd pod=fluentd-iwzbd_forge-paas-ns(efd6c1c8-a538-11e6-90b0-42010a800fd0)"

  6m    6m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Normal  Created         Created container with docker id 1636d00bdba0; Security:[seccomp=unconfined]
  6m    6m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Warning Failed          Failed to start container with docker id 1636d00bdba0 with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  5m    5m      2       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}                                     Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with CrashLoopBackOff: "Back-off 40s restarting failed container=fluentd pod=fluentd-iwzbd_forge-paas-ns(efd6c1c8-a538-11e6-90b0-42010a800fd0)"

  5m    5m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Normal  Created         Created container with docker id e820f8c142db; Security:[seccomp=unconfined]
  5m    5m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Warning Failed          Failed to start container with docker id e820f8c142db with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  5m    4m      7       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}                                     Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=fluentd pod=fluentd-iwzbd_forge-paas-ns(efd6c1c8-a538-11e6-90b0-42010a800fd0)"

  4m    4m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Normal  Created         Created container with docker id 38194fa0310a; Security:[seccomp=unconfined]
  4m    4m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Warning Failed          Failed to start container with docker id 38194fa0310a with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  3m    1m      12      {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}                                     Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=fluentd pod=fluentd-iwzbd_forge-paas-ns(efd6c1c8-a538-11e6-90b0-42010a800fd0)"

  6m    1m      6       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Normal  Pulled          Container image "fabric8/fluentd-kubernetes:v1.19" already present on machine
  6m    1m      6       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}                                     Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with RunContainerError: "runContainer: Error response from daemon: mkdir /mnt/ephemeral: read-only file system"

  1m    1m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Warning Failed          Failed to start container with docker id 05a70fc5fa2a with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  1m    1m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Normal  Created         Created container with docker id 05a70fc5fa2a; Security:[seccomp=unconfined]
  6m    11s     29      {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}     spec.containers{fluentd}        Warning BackOff         Back-off restarting failed docker container
  1m    11s     6       {kubelet gke-forge-paas-default-pool-2f6c1994-iu1d}                                     Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=fluentd pod=fluentd-iwzbd_forge-paas-ns(efd6c1c8-a538-11e6-90b0-42010a800fd0)"

And from the second pod

kubectl describe pods fluentd-spk75
Name:           fluentd-spk75
Namespace:      forge-paas-ns
Node:           gke-forge-paas-default-pool-2f6c1994-6ho0/10.128.0.3
Start Time:     Mon, 07 Nov 2016 15:24:24 -0700
Labels:         group=io.fabric8.devops.apps
                project=fluentd
                provider=fabric8
                version=2.2.296
Status:         Running
IP:             10.0.0.19
Controllers:    DaemonSet/fluentd
Containers:
  fluentd:
    Container ID:       docker://63b6743a1e3f9b08d8b1e5ddd68eb7098951ff7817ff4434555a6188ae7e6f73
    Image:              fabric8/fluentd-kubernetes:v1.19
    Image ID:           docker://sha256:c4d9030b93687aef0c1b5ed635ce7b009c50e77692769819af3d7367cf8cc05c
    Port:               24231/TCP
    Limits:
      cpu:      100m
    Requests:
      cpu:              100m
    State:              Waiting
      Reason:           CrashLoopBackOff
    Last State:         Terminated
      Reason:           ContainerCannotRun
      Exit Code:        128
      Started:          Mon, 07 Nov 2016 15:30:15 -0700
      Finished:         Mon, 07 Nov 2016 15:30:15 -0700
    Ready:              False
    Restart Count:      6
    Volume Mounts:
      /mnt/ephemeral/docker/containers from awsdocker (ro)
      /mnt/sda1/var/lib/docker/containers from minikubedocker (ro)
      /var/lib/docker/containers from defaultdocker (ro)
      /var/log from varlog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from fluentd-token-qaddh (ro)
    Environment Variables:
      ELASTICSEARCH_HOST:       elasticsearch
      ELASTICSEARCH_PORT:       9200
Conditions:
  Type          Status
  Initialized   True 
  Ready         False 
  PodScheduled  True 
Volumes:
  varlog:
    Type:       HostPath (bare host directory volume)
    Path:       /var/log
  defaultdocker:
    Type:       HostPath (bare host directory volume)
    Path:       /var/lib/docker/containers
  awsdocker:
    Type:       HostPath (bare host directory volume)
    Path:       /mnt/ephemeral/docker/containers
  minikubedocker:
    Type:       HostPath (bare host directory volume)
    Path:       /mnt/sda1/var/lib/docker/containers
  fluentd-token-qaddh:
    Type:       Secret (a volume populated by a Secret)
    SecretName: fluentd-token-qaddh
QoS Class:      Burstable
Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                                                    SubobjectPath                   Type            Reason          Message
  ---------     --------        -----   ----                                                    -------------                   --------        ------          -------
  8m            8m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Normal          Pulling         pulling image "fabric8/fluentd-kubernetes:v1.19"
  8m            8m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Normal          Pulled          Successfully pulled image "fabric8/fluentd-kubernetes:v1.19"
  8m            8m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Normal          Created         Created container with docker id dce2709cd6b3; Security:[seccomp=unconfined]
  8m            8m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Warning         Failed          Failed to start container with docker id dce2709cd6b3 with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  8m            8m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Normal          Created         Created container with docker id 912c7d020859; Security:[seccomp=unconfined]
  8m            8m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Warning         Failed          Failed to start container with docker id 912c7d020859 with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  8m            8m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Normal          Created         Created container with docker id 6b2b738ce354; Security:[seccomp=unconfined]
  8m            8m              1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Warning         Failed          Failed to start container with docker id 6b2b738ce354 with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  8m            8m              3       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}                                     Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with CrashLoopBackOff: "Back-off 20s restarting failed container=fluentd pod=fluentd-spk75_forge-paas-ns(efd6a1ed-a538-11e6-90b0-42010a800fd0)"

  7m    7m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Warning Failed          Failed to start container with docker id 2e508583849a with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  7m    7m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Normal  Created         Created container with docker id 2e508583849a; Security:[seccomp=unconfined]
  7m    7m      3       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}                                     Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with CrashLoopBackOff: "Back-off 40s restarting failed container=fluentd pod=fluentd-spk75_forge-paas-ns(efd6a1ed-a538-11e6-90b0-42010a800fd0)"

  7m    7m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Normal  Created         Created container with docker id 5d51a9b0b79d; Security:[seccomp=unconfined]
  7m    7m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Warning Failed          Failed to start container with docker id 5d51a9b0b79d with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  6m    5m      6       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}                                     Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=fluentd pod=fluentd-spk75_forge-paas-ns(efd6a1ed-a538-11e6-90b0-42010a800fd0)"

  5m    5m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Normal  Created         Created container with docker id fc1fdbb29b05; Security:[seccomp=unconfined]
  5m    5m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Warning Failed          Failed to start container with docker id fc1fdbb29b05 with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  5m    3m      12      {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}                                     Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=fluentd pod=fluentd-spk75_forge-paas-ns(efd6a1ed-a538-11e6-90b0-42010a800fd0)"

  2m    2m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Warning Failed          Failed to start container with docker id 63b6743a1e3f with error: Error response from daemon: mkdir /mnt/ephemeral: read-only file system
  8m    2m      6       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Normal  Pulled          Container image "fabric8/fluentd-kubernetes:v1.19" already present on machine
  8m    2m      7       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}                                     Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with RunContainerError: "runContainer: Error response from daemon: mkdir /mnt/ephemeral: read-only file system"

  2m    2m      1       {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Normal  Created         Created container with docker id 63b6743a1e3f; Security:[seccomp=unconfined]
  8m    0s      38      {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}     spec.containers{fluentd}        Warning BackOff         Back-off restarting failed docker container
  2m    0s      14      {kubelet gke-forge-paas-default-pool-2f6c1994-6ho0}                                     Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "fluentd" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=fluentd pod=fluentd-spk75_forge-paas-ns(efd6a1ed-a538-11e6-90b0-42010a800fd0)"
rawlingsj commented 8 years ago

I've tracked this down. The error from above,

Error response from daemon: mkdir /mnt/ephemeral: read-only file system

is caused by a volume mount on the fluentd DaemonSet.

In fact there were two volume mounts that I needed to remove; the name of each suggests they're not needed on GCE anyhow. The workaround until we get a proper fix is to edit the DaemonSet:

kubectl edit ds fluentd 

and delete these lines:

https://github.com/fabric8io/fabric8-devops/blob/master/fluentd/src/main/fabric8/daemonset.yml#L28-L33 and https://github.com/fabric8io/fabric8-devops/blob/master/fluentd/src/main/fabric8/daemonset.yml#L47-L52
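
For reference, those line ranges correspond to the awsdocker and minikubedocker entries in the pod spec (a sketch taken from the DaemonSet dump shown later in this thread; the exact line numbers in daemonset.yml may have drifted):

        volumeMounts:              # other mounts omitted
        - mountPath: /mnt/ephemeral/docker/containers
          name: awsdocker
          readOnly: true
        - mountPath: /mnt/sda1/var/lib/docker/containers
          name: minikubedocker
          readOnly: true

      volumes:                     # other volumes omitted
      - hostPath:
          path: /mnt/ephemeral/docker/containers
        name: awsdocker
      - hostPath:
          path: /mnt/sda1/var/lib/docker/containers
        name: minikubedocker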

@jimmidyson do you know of a proper fix?

antifragileer commented 8 years ago

So I finally got to this. I installed management into an environment and ran the DaemonSet edit, but neither of those line ranges is in the fluentd DaemonSet installed into that namespace.

kubectl -n dev-testing edit ds fluentd
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    fabric8.io/iconUrl: https://cdn.rawgit.com/fabric8io/fabric8-devops/master/fluentd/src/main/fabric8/icon.png
  creationTimestamp: 2016-11-21T20:26:42Z
  generation: 2
  labels:
    group: io.fabric8.devops.apps
    project: fluentd
    provider: fabric8
    version: 2.2.297
  name: fluentd
  namespace: dev-testing
  resourceVersion: "851511"
  selfLink: /apis/extensions/v1beta1/namespaces/dev-testing/daemonsets/fluentd
  uid: d041686b-b028-11e6-a600-42010af0012a
spec:
  selector:
    matchLabels:
      group: io.fabric8.devops.apps
      project: fluentd
      provider: fabric8
      version: 2.2.297
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        group: io.fabric8.devops.apps
        project: fluentd
        provider: fabric8
        version: 2.2.297
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        image: fabric8/fluentd-kubernetes:v1.19
        imagePullPolicy: IfNotPresent
        name: fluentd
        ports:
        - containerPort: 24231
          name: scrape
          protocol: TCP
        resources:
          limits:
            cpu: 100m
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker/containers
          name: defaultdocker
          readOnly: true
        - mountPath: /mnt/ephemeral/docker/containers
          name: awsdocker
          readOnly: true
        - mountPath: /mnt/sda1/var/lib/docker/containers
          name: minikubedocker
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      serviceAccount: fluentd
      serviceAccountName: fluentd
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /var/log
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
        name: defaultdocker
      - hostPath:
          path: /mnt/ephemeral/docker/containers
        name: awsdocker
      - hostPath:
          path: /mnt/sda1/var/lib/docker/containers
        name: minikubedocker
status:
  currentNumberScheduled: 2
  desiredNumberScheduled: 2
  numberMisscheduled: 0

And describing it...

kubectl -n dev-testing describe ds fluentd
Name:           fluentd
Image(s):       fabric8/fluentd-kubernetes:v1.19
Selector:       group=io.fabric8.devops.apps,project=fluentd,provider=fabric8,version=2.2.297
Node-Selector:  <none>
Labels:         group=io.fabric8.devops.apps
                project=fluentd
                provider=fabric8
                version=2.2.297
Desired Number of Nodes Scheduled: 2
Current Number of Nodes Scheduled: 2
Number of Nodes Misscheduled: 0
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Events:
  FirstSeen     LastSeen        Count   From            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----            -------------   --------        ------                  -------
  6m            6m              1       {daemon-set }                   Normal          SuccessfulCreate        Created pod: fluentd-wknvj
  6m            6m              1       {daemon-set }                   Normal          SuccessfulCreate        Created pod: fluentd-eskok

And the pods...

kubectl -n dev-testing get pods
NAME                                        READY     STATUS              RESTARTS   AGE
elasticsearch-2415127616-hv5m8              2/2       Running             0          11m
fluentd-eskok                               0/1       RunContainerError   7          11m
fluentd-wknvj                               0/1       CrashLoopBackOff    7          11m
grafana-3902895550-4zgi7                    1/1       Running             0          3h
kibana-3264104781-5raun                     2/2       Running             0          2h
message-broker-1045034239-k0wim             1/1       Running             0          3h
message-gateway-474760680-yct21             1/1       Running             0          3h
ms-dev-404638559-1bf51                      1/1       Running             0          3h
node-exporter-0gyqr                         1/1       Running             0          3h
node-exporter-r52t7                         1/1       Running             0          3h
prometheus-999244325-tegq9                  2/2       Running             0          3h
prometheus-blackbox-expo-1820759746-4xf1t   1/1       Running             0          3h
zookeeper-3695684073-h7c1v                  1/1       Running             0          3h
antifragileer commented 8 years ago

So are these the offending lines?

        - mountPath: /mnt/ephemeral/docker/containers
          name: awsdocker
          readOnly: true
        - mountPath: /mnt/sda1/var/lib/docker/containers
          name: minikubedocker
          readOnly: true

And..

      volumes:
      - hostPath:
          path: /var/log
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
        name: defaultdocker
      - hostPath:
          path: /mnt/ephemeral/docker/containers
        name: awsdocker
      - hostPath:
          path: /mnt/sda1/var/lib/docker/containers
        name: minikubedocker

The fluentd pods that Google runs in the kube-system namespace have this specified for their volumeMounts:

    volumeMounts:
    - mountPath: /var/log
      name: varlog
    - mountPath: /var/lib/docker/containers
      name: varlibdockercontainers
      readOnly: true
    - mountPath: /var/log/journal
      name: journaldir
    - mountPath: /host/lib
      name: libsystemddir

And also...

  volumes:
  - hostPath:
      path: /var/log
    name: varlog
  - hostPath:
      path: /var/lib/docker/containers
    name: varlibdockercontainers
  - hostPath:
      path: /var/log/journal
    name: journaldir
  - hostPath:
      path: /usr/lib64
    name: libsystemddir
antifragileer commented 8 years ago

OK, so I fixed the issue. I edited the DaemonSet as recommended, but I needed to change the volumes and volume mounts in the DaemonSet as follows:

        volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker/containers
          name: defaultdocker
          readOnly: true
        - mountPath: /var/log/journal
          name: journaldir
        - mountPath: /host/lib
          name: libsystemddir

And...

      volumes:
      - hostPath:
          path: /var/log
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
        name: defaultdocker
      - hostPath:
          path: /var/log/journal
        name: journaldir
      - hostPath:
          path: /usr/lib64
        name: libsystemddir

After changing that in the DaemonSet and deleting the existing pods so they get recreated, the new pods come up fine.
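
For example, something like this should delete the crashing pods and let the DaemonSet recreate them with the updated spec (the label selector is taken from the DaemonSet labels above, assuming the dev-testing namespace):

kubectl -n dev-testing delete pods -l project=fluentd,provider=fabric8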