OK, so I finally tried --validate=false (it didn't work yesterday) and it loaded the manifest:
╰─○ kubectl create --validate=false -f fluentd-daemon.yaml
╰─○ kg daemonset
NAME CONTAINER(S) IMAGE(S) SELECTOR NODE-SELECTOR
fluentd-elasticsearch fluentd-elasticsearch fabric8/fluentd-kubernetes:1.3 heritage=helm,k8s-app=fluentd-elasticsearch,kubernetes.io/cluster-service=true,version=v1 <none>
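For anyone else hitting the same validation error, here is a minimal sketch of the kind of DaemonSet manifest being loaded above. The name, labels, and image are taken from the `kg daemonset` output; the apiVersion, mounts, and resource limits are illustrative assumptions, not the exact contents of fluentd-daemon.yaml:

```yaml
# Minimal sketch of a fluentd DaemonSet along the lines of fluentd-daemon.yaml.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  labels:
    k8s-app: fluentd-elasticsearch
    kubernetes.io/cluster-service: "true"
    version: v1
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-elasticsearch
        kubernetes.io/cluster-service: "true"
        version: v1
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: fabric8/fluentd-kubernetes:1.3
        resources:
          limits:
            memory: 200Mi
        volumeMounts:
        # fluentd tails the per-container log files that Docker writes on each node
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
```

Presumably --validate=false was needed because the client-side schema validation didn't recognize the DaemonSet kind yet, while the API server accepted it.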
That's great! Now does logging work for you?
Yes, I was able to get it to log to Elasticsearch and view the logs with Kibana in a Vagrant cluster.
I am creating a Helm chart that encompasses this work, with a detailed README on how to get it working.
That sounds awesome!
@jimmidyson I just tried out 0.15 of your fabric8io/fluent-plugin-kubernetes_metadata_filter (by rebuilding the latest head of fabric8io/docker-fluentd-kubernetes, since v1.9 shipped with v0.13 and didn't bring in labels) using DaemonSets, and I'm successfully seeing pod labels in Kibana! Thanks so much for your hard work on this.
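For anyone following along, the pod-label enrichment comes from the kubernetes_metadata filter that the plugin registers. A minimal sketch of how that filter could be wired up via a ConfigMap mounted into the fluentd DaemonSet (the ConfigMap name, file name, paths, and tag pattern are illustrative assumptions; the fabric8 image ships its own bundled configuration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config            # illustrative name
data:
  kubernetes.conf: |
    # Tail the per-container log files that Docker writes on each node
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      format json
    </source>

    # Enrich every record with pod metadata (namespace, pod name, labels, ...)
    # using fluent-plugin-kubernetes_metadata_filter
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>
```

The filter looks up the pod that produced each log file via the API server and attaches its metadata to the record, which is what surfaces as pod labels in Kibana.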
@binarybana That's great to hear - thanks for the feedback.
Yes, the Deis team is using the fluentd Kubernetes plugin as part of our logging stack, and it works really well.
:grinning:
By the way, if you need an example of deploying fluentd onto Kubernetes with that plugin, you can look here: https://github.com/deis/fluentd
+1 to @jimmidyson's great work. :+1:
cc @piosz @fgrzadkowski
@igorpeshansky
Would it be possible to pass that metadata to the container? For example, if you are using the Docker Engine, you could use the label mechanism to pass this metadata, since Kubernetes would have this information at the time it schedules the new containers. This would mean I don't have to waste additional CPU cycles fetching that information.
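Not quite the Docker-label route described above, but a related mechanism that already exists is the Downward API, which can project a pod's own metadata into the container as files, so a process inside (e.g. a logging agent) can read the labels without extra API calls. A minimal sketch, with the pod name, image, and mount path as illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelled-app              # illustrative name
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/labels && sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
      readOnly: true
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      # Writes the pod's labels to /etc/podinfo/labels so anything running in
      # the container can read them without querying the API server
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```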
This is most probably related: https://github.com/kubernetes/kubernetes/pull/33584
Has anyone got this working on Minikube?
cc @crassirostris
@nehayward Could you please create an issue against https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter with a detailed description of your problem?
I'm closing this in favor of https://github.com/kubernetes/kubernetes/issues/42718, because working on the logging solution per se is going to be out of scope for core Kubernetes.
Continuing discussion from #3764. @a-robinson @mr-salty CC: @dchen1107 @roberthbailey