kubernetes-sigs / kubespray

Deploy a Production Ready Kubernetes Cluster
Apache License 2.0

DNS Resolution not working correctly #7869

Closed: nasseur closed this issue 2 years ago

nasseur commented 3 years ago

Hello,

After creating the cluster with Kubespray, DNS resolution was not working in any of the pods. We updated the kubelet-config file on all master nodes to point to the IP of CoreDNS instead of nodelocaldns, and that worked. However, we then noticed a delay of about 5s in DNS resolution, which impacts our performance testing. Could you please advise on this matter? Below is the error we saw before changing the DNS IP on the master nodes: [screenshot]
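For reference, a minimal sketch of the workaround described above, assuming Kubespray defaults (kubelet config at /etc/kubernetes/kubelet-config.yaml, a kube-system Service named coredns, nodelocaldns bound to 169.254.25.10). The 10.233.0.3 address is only Kubespray's default CoreDNS service IP, so verify it in your own cluster first:

```sh
# Look up the actual CoreDNS service IP (10.233.0.3 is only the Kubespray default).
kubectl -n kube-system get svc coredns -o jsonpath='{.spec.clusterIP}{"\n"}'

# On each master node, point kubelet at CoreDNS instead of nodelocaldns.
# Before (Kubespray default, nodelocaldns link-local address):
#   clusterDNS:
#     - 169.254.25.10
# After (CoreDNS service IP from the command above):
#   clusterDNS:
#     - 10.233.0.3
sudo sed -i 's/169\.254\.25\.10/10.233.0.3/' /etc/kubernetes/kubelet-config.yaml
sudo systemctl restart kubelet
```

Note that bypassing nodelocaldns this way removes the per-node DNS cache, so it treats the symptom rather than the root cause.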

The 5s delay is shown below: [screenshot]
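A 5-second stall matches the glibc resolver's default per-query timeout, so it usually means the first DNS query went unanswered (commonly dropped UDP packets, e.g. the well-known conntrack race aggravated by `ndots:5` search-domain expansion) and the resolver retried, rather than CoreDNS itself being slow. One way to reproduce the measurement from a throwaway pod (the pod name and image here are just examples):

```sh
# Time one in-cluster lookup and dump resolv.conf to inspect the
# ndots/search settings that drive query expansion.
kubectl run dnstest --rm -it --restart=Never --image=busybox:1.28 -- \
  sh -c 'time nslookup kubernetes.default.svc.cluster.local; cat /etc/resolv.conf'
```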

Environment:

- Hardware configuration / OS:

Master nodes (Red Hat Enterprise Linux 8.3):

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.3"

Worker nodes (Linux 3.10.0-1062.el7.x86_64 x86_64):

NAME="Red Hat Enterprise Linux Server"
VERSION="7.7 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.7"
PRETTY_NAME="Red Hat Enterprise Linux"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.7:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.7
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.7"

Kubespray version (commit) (git rev-parse --short HEAD): bcf69591

Network plugin used: calico

Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"):

master1 | SUCCESS => { "msg": "Hello world!" }
master3 | SUCCESS => { "msg": "Hello world!" }
master2 | SUCCESS => { "msg": "Hello world!" }
node1 | SUCCESS => { "msg": "Hello world!" }
node3 | SUCCESS => { "msg": "Hello world!" }
node2 | SUCCESS => { "msg": "Hello world!" }
node4 | SUCCESS => { "msg": "Hello world!" }
node5 | SUCCESS => { "msg": "Hello world!" }
elk1 | SUCCESS => { "msg": "Hello world!" }
elk2 | SUCCESS => { "msg": "Hello world!" }
elk3 | SUCCESS => { "msg": "Hello world!" }

Command used to invoke ansible: ansible-playbook --private-key=/root/.ssh/id_rsa -i inventory/cluster-perf/hosts.yml --limit=node1,node2,node3 --become --become-user=root cluster.yml -vvvvv --timeout 180

Output of ansible run:

Anything else we need to know:

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 2 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kubespray/issues/7869#issuecomment-1008180816):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues and PRs according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.