Closed: nasseur closed this issue 2 years ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Hello,
After creating the cluster with Kubespray, DNS resolution was not working in any pod. We updated the kubelet-config file on all master nodes to point to the IP of CoreDNS instead of the localdns address, and that fixed the resolution failures. However, we have since noticed a delay of about 5 seconds on DNS lookups, which impacts our performance testing. Could you please advise on this? The issue we faced before changing the CoreDNS IP on the master nodes is shown below.
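For completeness, a minimal sketch of how the setting can be double-checked; the kubelet-config path, the busybox image, and the pod name are assumptions based on a default Kubespray layout, so adjust them to your environment:

# On a master node: which DNS server does kubelet hand out to pods?
# (file path assumed from a default Kubespray install)
grep -A2 clusterDNS /etc/kubernetes/kubelet-config.yaml

# Compare it against the CoreDNS service ClusterIP in the cluster.
kubectl -n kube-system get svc

# Confirm what a freshly started pod actually receives in its resolver config.
kubectl run dns-check --image=busybox:1.36 --restart=Never --rm -it -- cat /etc/resolv.conf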
The 5-second delay is described below.
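A rough way to reproduce and measure the delay from a throwaway pod (the image name and the lookup target are only examples):

# Start a short-lived test pod.
kubectl run dns-timing --image=busybox:1.36 --restart=Never -- sleep 3600

# Time a lookup from inside it; with the problem present this tends to take
# roughly 5 seconds of wall-clock time instead of a few milliseconds.
time kubectl exec dns-timing -- nslookup kubernetes.default.svc.cluster.local

# Clean up.
kubectl delete pod dns-timing

A flat penalty of roughly 5 seconds often lines up with the default resolver timeout, so the options line (ndots, timeout) in the pod's /etc/resolv.conf may be worth checking as well.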
Environment:
- Hardware configuration:
  CPU: 16 vCPU
  Memory: 30G
- OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
  - master nodes: Linux 4.18.0-240.el8.x86_64 x86_64 NAME="Red Hat Enterprise Linux" VERSION="8.3 (Ootpa)" ID="rhel" ID_LIKE="fedora" VERSION_ID="8.3" PLATFORM_ID="platform:el8" PRETTY_NAME="Red Hat Enterprise Linux 8.3 (Ootpa)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:8.3:GA" HOME_URL="https://www.redhat.com/" BUG_REPORT_URL="https://bugzilla.redhat.com/" REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8" REDHAT_BUGZILLA_PRODUCT_VERSION=8.3 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="8.3"
  - worker nodes: Linux 3.10.0-1062.el7.x86_64 x86_64 NAME="Red Hat Enterprise Linux Server" VERSION="7.7 (Maipo)" ID="rhel" ID_LIKE="fedora" VARIANT="Server" VARIANT_ID="server" VERSION_ID="7.7" PRETTY_NAME="Red Hat Enterprise Linux" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:7.7:GA:server" HOME_URL="https://www.redhat.com/" BUG_REPORT_URL="https://bugzilla.redhat.com/" REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7" REDHAT_BUGZILLA_PRODUCT_VERSION=7.7 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="7.7"
- Version of Ansible (ansible --version): ansible 2.9.20
- Version of Python (python --version): Python 3.6.8
- Kubespray version (commit) (git rev-parse --short HEAD): bcf69591
- Network plugin used: calico
- Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"):
  master1 | SUCCESS => { "msg": "Hello world!" }
  master2 | SUCCESS => { "msg": "Hello world!" }
  master3 | SUCCESS => { "msg": "Hello world!" }
  node1 | SUCCESS => { "msg": "Hello world!" }
  node2 | SUCCESS => { "msg": "Hello world!" }
  node3 | SUCCESS => { "msg": "Hello world!" }
  node4 | SUCCESS => { "msg": "Hello world!" }
  node5 | SUCCESS => { "msg": "Hello world!" }
  elk1 | SUCCESS => { "msg": "Hello world!" }
  elk2 | SUCCESS => { "msg": "Hello world!" }
  elk3 | SUCCESS => { "msg": "Hello world!" }
- Command used to invoke ansible: ansible-playbook --private-key=/root/.ssh/id_rsa -i inventory/cluster-perf/hosts.yml --limit=node1,node2,node3 --become --become-user=root cluster.yml -vvvvv --timeout 180
- Output of ansible run:
- Anything else do we need to know: