Closed: roivaz closed this issue 3 months ago
@roivaz: Yes, the current node selector can cause a problem on HyperShift-based deployments. Can you please file an RFE for the NetworkEdge team?
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.

/lifecycle stale
/remove-lifecycle stale
@roivaz: https://github.com/openshift/external-dns-operator/pull/219 relaxes the node selector by removing the master label.
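For context, a minimal sketch of what the relaxed node selector could look like after that change. Only the removal of the node-role.kubernetes.io/master label is confirmed by the PR description; the remaining kubernetes.io/os key is an assumption for illustration:

```yaml
# Before (sketch): the hardcoded node selector pins external-dns pods to
# master nodes, which do not exist in HyperShift-provisioned clusters.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: linux                # assumed additional key
        node-role.kubernetes.io/master: ""     # removed by the PR

# After (sketch): without the master label, the pods can schedule on workers.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: linux
```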
@alebedev87 Thank you! That would definitely solve the issue. Any idea which release this fix will ship in?
The deadline is the GA of OpenShift 4.17, so the beginning of October. However, this operator ships outside the main OpenShift payload, so it can be released earlier.
Closing the issue as the reporter confirmed https://github.com/openshift/external-dns-operator/pull/219 should fix the issue.
It would be nice if the spec.template.spec.nodeSelector field in the external-dns Deployment could be made configurable by exposing it via the ExternalDNS custom resource. I am using the external-dns-operator in a cluster provisioned via HyperShift. In this scenario the scheduler is unable to run the external-dns pods because there are no master nodes available, so the pods stay in Pending forever. The node selector is hardcoded here. The workaround I'm applying is to remove the node-role.kubernetes.io/master: '' label from the nodeSelector in the Deployment object, as the operator does not seem to reconcile this field.
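For anyone hitting this before the fix lands, a sketch of the workaround described above. The Deployment name (external-dns-sample) and namespace (external-dns-operator) are hypothetical and depend on the name of your ExternalDNS resource:

```sh
# Remove the master node-role label from the operand Deployment's nodeSelector.
# Deployment name and namespace are assumptions; adjust to your ExternalDNS CR.
# Note the JSON Pointer escaping: "/" inside the label key becomes "~1".
kubectl -n external-dns-operator patch deployment external-dns-sample \
  --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector/node-role.kubernetes.io~1master"}]'
```

Since the operator does not appear to reconcile this field, the patch persists; if it did reconcile the Deployment, the change would be reverted on the next sync.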