rwillard opened this issue 8 years ago
@rwillard I've been working on a utility to achieve centralized logging via the journal. Just redirect every container's messages to it (via docker --log-driver, rkt through systemd-run, or --link-journal) and use one forwarder unit per machine:
[Unit]
Description=journald forwarder
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill journald-forwarder
ExecStartPre=-/usr/bin/docker rm journald-forwarder
ExecStartPre=/usr/bin/docker pull quay.io/glerchundi/journald-forwarder-loggly
ExecStart=/usr/bin/docker run \
--name journald-forwarder \
-v /lib64:/lib64:ro \
-v /var/log/journal:/var/log/journal:ro \
-v /usr/share/ca-certificates:/etc/ssl/certs:ro \
quay.io/glerchundi/journald-forwarder-loggly \
--loggly-token abcdefgh-ijkl-mnop-qrst-uvwxyzabcdef
[Install]
WantedBy=multi-user.target
[X-Fleet]
Global=true
You can take a look at the source code here. It's easy to add a new forwarder like Logstash (with libbeat).
WDYT?
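For anyone evaluating this approach: once containers log to the journal, a forwarder only has to consume journal entries and ship them upstream. Here is a minimal sketch of the parsing step, assuming journalctl's JSON export format (`journalctl -o json`, one JSON object per line). The field names are standard journald/docker fields, but the sample entry below is invented for illustration:

```python
import json

def parse_journal_entry(line: str) -> dict:
    """Extract the fields a forwarder would ship from one line of
    `journalctl -o json` output (one JSON object per line)."""
    entry = json.loads(line)
    return {
        # CONTAINER_NAME is set by docker's journald log driver
        "container": entry.get("CONTAINER_NAME", "<host>"),
        "unit": entry.get("_SYSTEMD_UNIT", ""),
        "message": entry.get("MESSAGE", ""),
    }

# Invented sample, shaped like real `journalctl -o json` output:
sample = ('{"MESSAGE": "listening on :8080", '
          '"CONTAINER_NAME": "web", "_SYSTEMD_UNIT": "docker.service"}')
print(parse_journal_entry(sample))
```

A real forwarder would of course tail the journal continuously and batch entries before shipping them to Loggly, Logstash, etc.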
I recommend using fluentd and Elasticsearch to aggregate your logs. Here's the upstream example.
You can schedule fluentd as a DaemonSet that mounts the /var/log/containers host directory on each node. We are currently working on a kube-aws distribution for this approach that includes Kibana for visualizing the Elasticsearch data.
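The DaemonSet approach works because the kubelet symlinks each container's log file into /var/log/containers under a name that encodes the pod, namespace, and container, which is what fluentd's Kubernetes metadata filter parses. A rough sketch of that convention; treat the exact pattern as an assumption matching the setup of this era:

```python
import re

# Assumed filename convention for kubelet symlinks under /var/log/containers:
#   <pod>_<namespace>_<container>-<64-hex-container-id>.log
LOG_NAME = re.compile(
    r"^(?P<pod>[^_]+)_(?P<namespace>[^_]+)"
    r"_(?P<container>.+)-(?P<cid>[0-9a-f]{64})\.log$"
)

def parse_log_name(name: str):
    """Return pod/namespace/container/id fields, or None for non-matching names."""
    m = LOG_NAME.match(name)
    return m.groupdict() if m else None

info = parse_log_name(
    "kibana-logging-v1-x2x1c_kube-system_kibana-logging-" + "a" * 64 + ".log"
)
```

If a collector cannot attach pod metadata, a name mismatch against this pattern is usually the first thing to check.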
@colhom what about system-level logs? Or the ones generated by the etcd machines, do you have any plans to run the kubelet there?
Great! I'm looking forward to more functionality and configuration with kube-aws.
So, what's the status? Any news on this? Is there no logging out of the box in kube-aws? What's the recommended workaround? Any ETA on the "kube-aws distribution for this approach that includes Kibana for visualizing the Elasticsearch data"? (@colhom)
+1
+1
I just got this working on my environment. Are you interested in a PR?
"this" being: fluentd-elasticsearch, elasticsearch and kibana
Great! I would very much be interested :)
@spacepluk, any progress on that PR? I've been trying to get the fluentd-elasticsearch thing going for a while now, but all my fluentd containers just keep "completing" without any output whatsoever. I'm thinking of switching the Docker daemon to the journald log driver, but I'm not sure if kubectl logs still works then; docker logs does still work with journald, according to the Docker docs.
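For reference, the daemon-wide switch can be made like this (assuming a Docker version that reads /etc/docker/daemon.json; on older CoreOS setups the equivalent was a --log-driver=journald flag in a systemd drop-in). Note the caveat above: kubectl logs of that era read the json-file driver's files under /var/lib/docker/containers, so this change likely breaks it even though docker logs keeps working:

```json
{
  "log-driver": "journald"
}
```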
I manually updated a few things in the files generated by kube-aws to get this working.
First, since the upstream fluentd container relies on the container logs being available under /var/log/containers on each worker node, you need to change the kubelet config as follows.
In userdata/cloud-config-worker:
#cloud-config
coreos:
  # ...
  units:
    # ...
    - name: kubelet.service
      enable: true
      command: start
      content: |
        [Unit]
        # ...
        [Service]
        # ...
        Environment="RKT_OPTS=--volume dns,kind=host,source=/etc/resolv.conf --mount volume=dns,target=/etc/resolv.conf --volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log"
        ExecStartPre=/usr/bin/mkdir -p /var/log/containers
        # ...
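To see why the mounts matter, here is a self-contained sketch of the layout the kubelet produces on each node: a JSON log file owned by Docker's json-file driver, plus a symlink to it under /var/log/containers. All paths here are a temp-dir simulation, not the real host layout; a collector that sees only /var/log would find a dangling symlink, which is why the fluentd pod also needs /var/lib/docker/containers:

```python
import json, os, tempfile

# Simulate (in a temp dir) the host layout the kubelet sets up:
#   var/lib/docker/containers/<id>/<id>-json.log   <- docker's json-file log
#   var/log/containers/<pod>_<ns>_<name>-<id>.log  <- kubelet's symlink
root = tempfile.mkdtemp()
cid = "0" * 64
docker_dir = os.path.join(root, "var/lib/docker/containers", cid)
link_dir = os.path.join(root, "var/log/containers")
os.makedirs(docker_dir)
os.makedirs(link_dir)

log_file = os.path.join(docker_dir, cid + "-json.log")
with open(log_file, "w") as f:
    # one JSON object per line, as written by the json-file log driver
    f.write(json.dumps({"log": "hello\n", "stream": "stdout"}) + "\n")

link = os.path.join(link_dir, f"mypod_default_app-{cid}.log")
os.symlink(log_file, link)

# With both directories visible, following the symlink recovers the log line:
with open(link) as f:
    first = json.loads(f.readline())
```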
Do the same thing in the kubelet config in userdata/cloud-config-controller. Then, also change that file to create pods for ES/Kibana and a fluentd DaemonSet (which will run one pod per worker):
# ...
write_files:
  - path: /opt/bin/install-kube-system
    permissions: 0700
    owner: root:root
    content: |
      #!/bin/bash -e
      # ...
      # Custom replication controllers
      for manifest in {kibana-logging,elasticsearch-logging}-rc.json; do
        /usr/bin/curl -H "Content-Type: application/json" -XPOST \
          -d @"/srv/kubernetes/manifests/$manifest" \
          "http://127.0.0.1:8080/api/v1/namespaces/kube-system/replicationcontrollers"
      done
      # Custom services
      for manifest in {elasticsearch-logging,kibana-logging}-svc.json; do
        /usr/bin/curl -H "Content-Type: application/json" -XPOST \
          -d @"/srv/kubernetes/manifests/$manifest" \
          "http://127.0.0.1:8080/api/v1/namespaces/kube-system/services"
      done
      # Custom daemon sets
      /usr/bin/curl -H "Content-Type: application/json" -XPOST \
        -d @"/srv/kubernetes/manifests/fluentd-cloud-logging-ds.json" \
        "http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/kube-system/daemonsets"
      # ...
  - path: /srv/kubernetes/manifests/elasticsearch-logging-rc.json
    content: |
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {
          "name": "elasticsearch-logging-v1",
          "namespace": "kube-system",
          "labels": {
            "k8s-app": "elasticsearch-logging",
            "version": "v1",
            "kubernetes.io/cluster-service": "true"
          }
        },
        "spec": {
          "replicas": 2,
          "selector": {
            "k8s-app": "elasticsearch-logging",
            "version": "v1"
          },
          "template": {
            "metadata": {
              "labels": {
                "k8s-app": "elasticsearch-logging",
                "version": "v1",
                "kubernetes.io/cluster-service": "true"
              }
            },
            "spec": {
              "containers": [
                {
                  "image": "gcr.io/google_containers/elasticsearch:1.8",
                  "name": "elasticsearch-logging",
                  "resources": {
                    "limits": {
                      "cpu": "100m"
                    },
                    "requests": {
                      "cpu": "100m"
                    }
                  },
                  "ports": [
                    {
                      "containerPort": 9200,
                      "name": "db",
                      "protocol": "TCP"
                    },
                    {
                      "containerPort": 9300,
                      "name": "transport",
                      "protocol": "TCP"
                    }
                  ],
                  "volumeMounts": [
                    {
                      "name": "es-persistent-storage",
                      "mountPath": "/data"
                    }
                  ]
                }
              ],
              "volumes": [
                {
                  "name": "es-persistent-storage",
                  "emptyDir": {}
                }
              ]
            }
          }
        }
      }
  - path: /srv/kubernetes/manifests/kibana-logging-rc.json
    content: |
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {
          "name": "kibana-logging-v1",
          "namespace": "kube-system",
          "labels": {
            "k8s-app": "kibana-logging",
            "version": "v1",
            "kubernetes.io/cluster-service": "true"
          }
        },
        "spec": {
          "replicas": 1,
          "selector": {
            "k8s-app": "kibana-logging",
            "version": "v1"
          },
          "template": {
            "metadata": {
              "labels": {
                "k8s-app": "kibana-logging",
                "version": "v1",
                "kubernetes.io/cluster-service": "true"
              }
            },
            "spec": {
              "containers": [
                {
                  "name": "kibana-logging",
                  "image": "gcr.io/google_containers/kibana:1.3",
                  "resources": {
                    "limits": {
                      "cpu": "100m"
                    },
                    "requests": {
                      "cpu": "100m"
                    }
                  },
                  "env": [
                    {
                      "name": "ELASTICSEARCH_URL",
                      "value": "http://elasticsearch-logging:9200"
                    }
                  ],
                  "ports": [
                    {
                      "containerPort": 5601,
                      "name": "ui",
                      "protocol": "TCP"
                    }
                  ]
                }
              ]
            }
          }
        }
      }
  - path: /srv/kubernetes/manifests/elasticsearch-logging-svc.json
    content: |
      {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {
          "name": "elasticsearch-logging",
          "namespace": "kube-system",
          "labels": {
            "k8s-app": "elasticsearch-logging",
            "kubernetes.io/cluster-service": "true",
            "kubernetes.io/name": "Elasticsearch"
          }
        },
        "spec": {
          "ports": [
            {
              "port": 9200,
              "protocol": "TCP",
              "targetPort": "db"
            }
          ],
          "selector": {
            "k8s-app": "elasticsearch-logging"
          }
        }
      }
  - path: /srv/kubernetes/manifests/kibana-logging-svc.json
    content: |
      {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {
          "name": "kibana-logging",
          "namespace": "kube-system",
          "labels": {
            "k8s-app": "kibana-logging",
            "kubernetes.io/cluster-service": "true",
            "kubernetes.io/name": "Kibana"
          }
        },
        "spec": {
          "ports": [
            {
              "port": 5601,
              "protocol": "TCP",
              "targetPort": "ui"
            }
          ],
          "selector": {
            "k8s-app": "kibana-logging"
          }
        }
      }
  - path: /srv/kubernetes/manifests/fluentd-cloud-logging-ds.json
    content: |
      {
        "apiVersion": "extensions/v1beta1",
        "kind": "DaemonSet",
        "metadata": {
          "name": "fluentd-elasticsearch",
          "namespace": "kube-system",
          "labels": {
            "k8s-app": "fluentd-logging"
          }
        },
        "spec": {
          "template": {
            "metadata": {
              "name": "fluentd-elasticsearch",
              "namespace": "kube-system",
              "labels": {
                "k8s-app": "fluentd-logging"
              }
            },
            "spec": {
              "containers": [
                {
                  "name": "fluentd-elasticsearch",
                  "image": "gcr.io/google_containers/fluentd-elasticsearch:1.15",
                  "resources": {
                    "limits": {
                      "memory": "200Mi"
                    },
                    "requests": {
                      "cpu": "100m",
                      "memory": "200Mi"
                    }
                  },
                  "volumeMounts": [
                    {
                      "name": "varlog",
                      "mountPath": "/var/log"
                    },
                    {
                      "name": "varlibdockercontainers",
                      "mountPath": "/var/lib/docker/containers",
                      "readOnly": true
                    }
                  ]
                }
              ],
              "terminationGracePeriodSeconds": 30,
              "volumes": [
                {
                  "name": "varlog",
                  "hostPath": {
                    "path": "/var/log"
                  }
                },
                {
                  "name": "varlibdockercontainers",
                  "hostPath": {
                    "path": "/var/lib/docker/containers"
                  }
                }
              ]
            }
          }
        }
      }
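One detail in install-kube-system worth calling out: ReplicationControllers and Services live in the core v1 API, while DaemonSets at this Kubernetes version live under extensions/v1beta1, which is why the curl targets differ. A small sketch of that routing; the resource-to-group mapping is taken from the script above, and everything else (names, helper) is illustrative:

```python
API_SERVER = "http://127.0.0.1:8080"  # insecure local apiserver port used above

# Which API group prefix each resource collection lived under at this version
RESOURCE_PATHS = {
    "replicationcontrollers": "/api/v1",
    "services": "/api/v1",
    "daemonsets": "/apis/extensions/v1beta1",
}

def manifest_url(resource: str, namespace: str = "kube-system") -> str:
    """Build the collection URL that install-kube-system POSTs a manifest to."""
    prefix = RESOURCE_PATHS[resource]
    return f"{API_SERVER}{prefix}/namespaces/{namespace}/{resource}"
```

Posting a DaemonSet manifest to the /api/v1 path is a common mistake here; the apiserver returns a 404 rather than anything self-explanatory.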
Thanks, this is a good base to start from.
Thanks!
Thank you!
Is there a way to modify an existing cluster so that fluentd is able to get the containers' logs?
Why is that not defined by default? Any progress on that?
@AlmogBaku we have not reached a conclusion on whether or not kube-aws should ship with log aggregation facilities. While this is a feature many users have asked for, we have yet to settle on a solution that we're comfortable providing as a managed feature of kube-aws clusters going forward.
/cc @aaronlevy
But can we agree that the nodes should maintain the symlinks in /var/log/containers/, so if someone is willing to deploy an aggregation solution he'll be able to? :)
But can we agree that the nodes should maintain the symlinks in /var/log/containers/
@AlmogBaku I think the community is converging on agreement there. For CoreOS internal production use we've been carrying the /var/log/containers mount patch as well, so naturally I'm in favor. I'll have a PR up for this after #608 is sorted.
if someone is willing to deploy an aggregation solution he'll be able to?
kube-aws supports users of any gender ;)
For CoreOS internal production use we've been carrying the /var/log/containers mount patch as well, so naturally I'm in favor. I'll have a PR up for this after #608 is sorted.
@colhom this PR has you beat: https://github.com/coreos/coreos-kubernetes/pull/650 :)
Moving my stack from fleet to Kubernetes. Previously I was using Logstash and forwarding journalctl output to it. What's the best way to accomplish centralized logging with a cluster built using kube-aws? I wanted to follow this guide http://kubernetes.io/docs/getting-started-guides/logging-elasticsearch/, but that's set up at cluster creation.