What is the thing that fails and what would you expect to happen instead?
The role that updates both the yum fastestmirror and yum proxy settings is not being applied at all. The play reports the tasks as done, but nothing on the nodes actually changes; I would expect the settings to actually be written to the files on the nodes.
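The quickest check on a node is to look at the two files directly (paths below are the stock CentOS 7 locations; the proxy value is whatever is configured for the environment):

grep '^proxy=' /etc/yum.conf
cat /etc/yum/pluginconf.d/fastestmirror.conf

On my nodes both still show the old contents after the play reports the tasks as done.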
This is not visible from the gist that you uploaded (it states it didn't even do anything, so the line is probably already present in that file) - sorry, but I can't help you with that with the given information.
So I had to go in manually and enter the lines after it failed so many times. I can remove the lines and test again.
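For reference, the lines I ended up adding by hand were roughly the following (the proxy URL is a placeholder rather than our real value, and the fastestmirror value is just the toggle I needed):

# /etc/yum.conf -- placeholder proxy URL, not the real one
proxy=http://proxy.example.com:80

# /etc/yum/pluginconf.d/fastestmirror.conf
[main]
enabled=0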
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Environment:
Cloud provider or hardware configuration: AWS EC2
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):

Admin node:
Linux 3.10.0-957.5.1.el7.x86_64 x86_64
NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7" HOME_URL="https://www.centos.org/" BUG_REPORT_URL="https://bugs.centos.org/" CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7"

Cluster nodes:
Linux 3.10.0-514.21.1.el7.x86_64 x86_64
NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7" HOME_URL="https://www.centos.org/" BUG_REPORT_URL="https://bugs.centos.org/" CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7"
Version of Ansible (ansible --version):
ansible 2.7.10
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/kubespray/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]

Kubespray version (commit) (git rev-parse --short HEAD): 7f1d9ff

Network plugin used: Calico
Copy of your inventory file:

[all]
ip-10-250-192-41.us-west-2.compute.internal ansible_host=10.250.192.41
ip-10-250-193-115.us-west-2.compute.internal ansible_host=10.250.193.115
ip-10-250-194-35.us-west-2.compute.internal ansible_host=10.250.194.35
ip-10-250-192-243.us-west-2.compute.internal ansible_host=10.250.192.243
ip-10-250-193-103.us-west-2.compute.internal ansible_host=10.250.193.103
ip-10-250-194-140.us-west-2.compute.internal ansible_host=10.250.194.140
ip-10-250-192-41.us-west-2.compute.internal ansible_host=10.250.192.41
ip-10-250-193-115.us-west-2.compute.internal ansible_host=10.250.193.115
ip-10-250-194-35.us-west-2.compute.internal ansible_host=10.250.194.35

[bastion]
bastion ansible_host=34.221.16.227

[kube-master]
ip-10-250-192-41.us-west-2.compute.internal
ip-10-250-193-115.us-west-2.compute.internal
ip-10-250-194-35.us-west-2.compute.internal

[kube-node]
ip-10-250-192-243.us-west-2.compute.internal
ip-10-250-193-103.us-west-2.compute.internal
ip-10-250-194-140.us-west-2.compute.internal

[etcd]
ip-10-250-192-41.us-west-2.compute.internal
ip-10-250-193-115.us-west-2.compute.internal
ip-10-250-194-35.us-west-2.compute.internal

[k8s-cluster:children]
kube-node
kube-master

[k8s-cluster:vars]
apiserver.xx.xx.amazon.com
Command used to invoke ansible:
ansible-playbook -i ../multicloud-aws-terraform/hosts ./cluster.yml -e ansible_user=centos -e cloud_provider=aws -e bootstrap_os=centos -e ansible_ssh_private_key_file=~/.ssh/mypemkey.pem -b --flush-cache -e ansible_ssh_host=x.x.x.x -vvv
Output of ansible run:
https://gist.github.com/andelhie/5fdc2df33621eee315430948369a1423
Anything else we need to know:
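For completeness, the proxy values themselves are set in the inventory group_vars; the file path and variable names below are from the sample inventory as I remember it and may differ by kubespray version, and the URL is a placeholder:

## group_vars/all/all.yml (path may differ by kubespray version)
http_proxy: "http://proxy.example.com:80"
https_proxy: "http://proxy.example.com:80"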