kubevirt / kubesecondarydns

DNS for KubeVirt VirtualMachines secondary interfaces
Apache License 2.0

sync: Label the control-plane as worker #40

Closed · oshoval closed this 1 year ago

oshoval commented 1 year ago

Since the NodePort is created only for workers, label the control-plane node as a worker so we can use multiple nodes if needed. kubevirtci forwards the DNS traffic only to the first node, so the NodePort needs to serve the first node as well.
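For reference, a minimal sketch of how the control-plane node could be labeled with the worker role; the node name `node01` (the usual kubevirtci convention) is an assumption, not something stated in this PR:

```sh
# Assumed node name: node01 (kubevirtci convention).
# Add the worker role label to the control-plane node so that the
# worker-only NodePort described in this PR also covers it.
kubectl label node node01 node-role.kubernetes.io/worker=""

# Verify the label was applied.
kubectl get node node01 --show-labels
```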

kubevirt-bot commented 1 year ago

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

Once this PR has been reviewed and has the lgtm label, please assign alonakaplan for approval by writing `/assign @alonakaplan` in a comment. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

- **[OWNERS](https://github.com/kubevirt/kubesecondarydns/blob/main/OWNERS)**

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
oshoval commented 1 year ago

@AlonaKaplan do we want this, or should we close it?

(Disregard the failure here; it is not related.)

oshoval commented 1 year ago

Hi @brianmcarey,

Is something wrong with memory on the CI, maybe? https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubesecondarydns/40/pull-kubesecondarydns-e2e-k8s/1599681646913654784

> There are no nodes that your pod can schedule to - check your requests, tolerations, and node selectors (0/14 nodes are available: 1 node(s) had taint {ci.kubevirt.io/cachenode: true}, that the pod didn't tolerate, 10 Insufficient memory, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.)

We also see dnf failing with error 137 on other PRs, which can indicate OOM: https://github.com/kubevirt/project-infra/pull/2501

Thanks

brianmcarey commented 1 year ago

@oshoval - I had a quick look and I couldn't see anything worrying in the metrics. The jobs from the project-infra PR run on a different cluster from the failure here so I don't think they are related.

oshoval commented 1 year ago

> @oshoval - I had a quick look and I couldn't see anything worrying in the metrics. The jobs from the project-infra PR run on a different cluster from the failure here so I don't think they are related.

Thanks Brian

AlonaKaplan commented 1 year ago

We currently have only one node (and I don't see any reason to have more than one in the future). Therefore, the change in the PR is not required.

oshoval commented 1 year ago

Then we at least need QE to test that we listen on both nodes, or even try it manually once, IMO.