kubernetes-sigs / kubespray

Deploy a Production Ready Kubernetes Cluster
Apache License 2.0
16.21k stars 6.5k forks

Centos bootstrap roles not working mostly fastmirror disable and proxy #4541

Closed andelhie closed 5 years ago

andelhie commented 5 years ago

Environment:

```
CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7"
```

Node OS: Linux 3.10.0-514.21.1.el7.x86_64 x86_64

```
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
```

Kubespray version (commit) (git rev-parse --short HEAD): 7f1d9ff

Network plugin used: Calico

Copy of your inventory file:

```ini
[all]
ip-10-250-192-41.us-west-2.compute.internal ansible_host=10.250.192.41
ip-10-250-193-115.us-west-2.compute.internal ansible_host=10.250.193.115
ip-10-250-194-35.us-west-2.compute.internal ansible_host=10.250.194.35
ip-10-250-192-243.us-west-2.compute.internal ansible_host=10.250.192.243
ip-10-250-193-103.us-west-2.compute.internal ansible_host=10.250.193.103
ip-10-250-194-140.us-west-2.compute.internal ansible_host=10.250.194.140
ip-10-250-192-41.us-west-2.compute.internal ansible_host=10.250.192.41
ip-10-250-193-115.us-west-2.compute.internal ansible_host=10.250.193.115
ip-10-250-194-35.us-west-2.compute.internal ansible_host=10.250.194.35

[bastion]
bastion ansible_host=34.221.16.227

[kube-master]
ip-10-250-192-41.us-west-2.compute.internal
ip-10-250-193-115.us-west-2.compute.internal
ip-10-250-194-35.us-west-2.compute.internal

[kube-node]
ip-10-250-192-243.us-west-2.compute.internal
ip-10-250-193-103.us-west-2.compute.internal
ip-10-250-194-140.us-west-2.compute.internal

[etcd]
ip-10-250-192-41.us-west-2.compute.internal
ip-10-250-193-115.us-west-2.compute.internal
ip-10-250-194-35.us-west-2.compute.internal

[k8s-cluster:children]
kube-node
kube-master

[k8s-cluster:vars]
apiserver.xx.xx.amazon.com
```

Command used to invoke ansible:

```shell
ansible-playbook -i ../multicloud-aws-terraform/hosts ./cluster.yml \
  -e ansible_user=centos -e cloud_provider=aws -e bootstrap_os=centos \
  -e ansible_ssh_private_key_file=~/.ssh/mypemkey.pem \
  -b --flush-cache -e ansible_ssh_host=x.x.x.x -vvv
```

Output of ansible run:

https://gist.github.com/andelhie/5fdc2df33621eee315430948369a1423

Anything else we need to know:

MarkusTeufelberger commented 5 years ago

What is the thing that fails and what would you expect to happen instead?

andelhie commented 5 years ago

The role that updates the yum fastestmirror and yum proxy settings is not being applied at all. The output says it is making the change, but it is not.
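For reference, a minimal sketch of the two edits the CentOS 7 bootstrap is expected to make. This is an illustration, not the Kubespray role itself: it operates on throwaway copies under a temp directory rather than the real `/etc/yum/pluginconf.d/fastestmirror.conf` and `/etc/yum.conf`, and `proxy.example.com` is a placeholder value, not one from this issue.

```shell
#!/bin/sh
# Sketch of the expected changes, applied to throwaway copies so it is
# safe to run anywhere. Real targets on CentOS 7 would be
# /etc/yum/pluginconf.d/fastestmirror.conf and /etc/yum.conf.
tmp=$(mktemp -d)

# 1. Disable the fastestmirror yum plugin (enabled=1 -> enabled=0).
printf '[main]\nenabled=1\n' > "$tmp/fastestmirror.conf"
sed -i 's/^enabled=1/enabled=0/' "$tmp/fastestmirror.conf"

# 2. Append a proxy line to yum's main config, but only if none exists.
#    proxy.example.com:3128 is a hypothetical placeholder.
printf '[main]\ngpgcheck=1\n' > "$tmp/yum.conf"
grep -q '^proxy=' "$tmp/yum.conf" || \
  echo 'proxy=http://proxy.example.com:3128' >> "$tmp/yum.conf"

cat "$tmp/fastestmirror.conf" "$tmp/yum.conf"
```

Checking the real files on a node with `grep '^enabled' /etc/yum/pluginconf.d/fastestmirror.conf` and `grep '^proxy' /etc/yum.conf` after a run would show whether the role actually applied anything.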

MarkusTeufelberger commented 5 years ago

This is not visible from the gist that you uploaded (it states it didn't even do anything, so the line is probably already present in that file) - sorry, but I can't help you with that with the given information.

andelhie commented 5 years ago

So I had to go in manually and enter the lines after it failed so many times. I can remove the lines and test again.


fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 5 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 5 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kubespray/issues/4541#issuecomment-533324702):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.