kubernetes-retired / contrib

[EOL] This is a place for various components in the Kubernetes ecosystem that aren't part of the Kubernetes core.
Apache License 2.0

Fluentd setup broken in ansible setup. #2159

Closed mohammedzee1000 closed 7 years ago

mohammedzee1000 commented 7 years ago

Kubectl version: Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5e723f67f1e36d387a8a7faa6aa8a7f40cc9ca46", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5e723f67f1e36d387a8a7faa6aa8a7f40cc9ca46", GitTreeState:"clean"}

Environment:

Linux localhost 4.5.5-300.fc24.x86_64 #1 SMP Thu May 19 13:05:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

What happened:

...
TASK [node : Setting the kubelet_modified fact to true] ************************
ok: [192.168.121.128]
ok: [192.168.121.138]

TASK [node : Install fluentd pod into each node] *******************************
fatal: [192.168.121.128]: FAILED! => {"changed": false, "dest": "/etc/kubernetes/manifests", "failed": true, "gid": 0, "group": "root", "mode": "0755", "msg": "Request failed", "owner": "root", "response": "HTTP Error 404: Not Found", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 4096, "state": "directory", "status_code": 404, "uid": 0, "url": "https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml"}
fatal: [192.168.121.138]: FAILED! => {"changed": false, "dest": "/etc/kubernetes/manifests", "failed": true, "gid": 0, "group": "root", "mode": "0755", "msg": "Request failed", "owner": "root", "response": "HTTP Error 404: Not Found", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 4096, "state": "directory", "status_code": 404, "uid": 0, "url": "https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml"}
    to retry, use: --limit @/home/moahmed/Work/kubernetes/contrib/ansible/playbooks/deploy-cluster.retry

What you expected to happen:

The ansible playbook should not have failed at this point.

How to reproduce it:

Run the ansible playbook with an appropriate inventory, setting "cluster_logging: true" in inventory/group_vars/all.yml.
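The inventory change that triggers the failing task can be sketched as follows (a minimal excerpt, assuming the default group_vars layout in contrib/ansible):

```yaml
# inventory/group_vars/all.yml (excerpt)
# Enabling cluster logging makes the node role try to fetch the
# fluentd-es manifest from the kubernetes repo, which now 404s.
cluster_logging: true
```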

Anything else we need to know:

I found that fluentd-es is no longer available in the kubernetes repository; it has been replaced by fluentd-gcp, located at the URL below:

https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml

lucas-cegatti commented 7 years ago

I had the same problem; here is some additional information.

fluentd-es was migrated to a DaemonSet in this commit.

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml

For now I changed the raw link to the release-1.5 branch. Since a DaemonSet is scheduled onto every node automatically, I think it won't be necessary to create this pod on each node anymore.
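The interim workaround above can be sketched as pinning the fetched manifest to a release branch instead of master (the task name matches the failing output; the exact role/file path and dest filename are assumptions, not confirmed from the playbook source):

```yaml
# roles/node/tasks (illustrative path) — pin the raw URL to release-1.5,
# where cluster/saltbase/salt/fluentd-es/fluentd-es.yaml still exists.
- name: Install fluentd pod into each node
  get_url:
    url: "https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.5/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml"
    dest: /etc/kubernetes/manifests/fluentd-es.yaml
    mode: "0644"
```

The longer-term fix would be to deploy the fluentd-es DaemonSet manifest (cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml) once via kubectl rather than dropping a static pod manifest onto every node.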

dustymabe commented 7 years ago

I hit this as well, using the patch from the PR allowed me to get around it.