Closed mohammedzee1000 closed 7 years ago
I had the same problem, here are some additional information.
The fluentd-es pod was migrated to a DaemonSet in this commit.
For now I changed the raw link to release-1.5. Since a DaemonSet schedules a pod on every node automatically, I think it won't be necessary to create this pod on each node manually anymore.
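For reference, a DaemonSet that runs one fluentd pod per node might look roughly like the sketch below. This is an illustrative assumption, not the actual manifest from the linked commit; the image tag, names, and labels are placeholders:

```yaml
# Minimal DaemonSet sketch: the scheduler places one copy of this pod
# on every node, replacing the old per-node pod creation.
# All names and the image tag below are illustrative assumptions.
apiVersion: extensions/v1beta1   # DaemonSet API group as of Kubernetes 1.2
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: fluentd-es
    spec:
      containers:
      - name: fluentd-es
        image: gcr.io/google_containers/fluentd-elasticsearch:1.15  # assumed tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log   # collect node logs from the host
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```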
I hit this as well, using the patch from the PR allowed me to get around it.
Kubectl version:
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5e723f67f1e36d387a8a7faa6aa8a7f40cc9ca46", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5e723f67f1e36d387a8a7faa6aa8a7f40cc9ca46", GitTreeState:"clean"}
Environment (`uname -a`): Linux localhost 4.5.5-300.fc24.x86_64 #1 SMP Thu May 19 13:05:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
What happened:
What you expected to happen:
The Ansible playbook should not have failed at this point.
How to reproduce it:
Run the Ansible playbook with an appropriate inventory and set "cluster_logging: true" in inventory/group_vars/all.yml.
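The inventory change in the step above would look like this (a minimal sketch, assuming the standard group_vars layout; the surrounding variables in the real file are omitted):

```yaml
# inventory/group_vars/all.yml
# Enables deployment of the cluster logging stack (fluentd + Elasticsearch).
cluster_logging: true
```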
Anything else we need to know:
I found that fluentd-es is no longer available in the kubernetes repository; it has been replaced by fluentd-gcp, located at the URL below:
https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml