Closed black-dragon74 closed 6 months ago
Hey! I am confused on how this is happening? Is this in an upgrade scenario? Or using an old tag with the latest playbook?
Happened to me, too. Scenario for me: changed the version for metallb in the all.yml in my inventory. Redeployed the cluster by running the playbook.
They changed the webhook service name starting with v0.14.4 (added metallb prefix) to more accurately denote where the service comes from.
Got it. Hmmm... I'm not sure I want to make any changes, since this is a point-in-time snapshot; otherwise we would have to account for every breaking change in every included dependency. We always ensure that the current configuration works on clean machines (since we run it in CI). Generally speaking, upgrading via the playbook (although it does work) is kind of an edge case, since we recommend using helm or gitops to handle this after the cluster is bootstrapped.
If you're saying that it's broken with the latest version of MetalLB, I'm happy to review a PR for it!
Trying to fix this issue, faced it too. https://github.com/techno-tim/k3s-ansible/pull/528/files
While deploying the manifests for MetalLB version v0.14.4, the `webhook-service` is named `metallb-webhook-service`. This causes the post task to fail with an error.

Posting it here for anyone else facing the same. I can send a PR for it.
It would make sense to dynamically set a var for the service name depending on the config value `metal_lb_*_tag_version`. WDYT @timothystewart6?
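A minimal Ansible sketch of that idea, assuming a hypothetical `metal_lb_tag_version` variable (the actual variable name in the playbook may differ): compare the configured version against v0.14.4 and pick the matching service name, then reference the fact in the post task.

```yaml
# Hypothetical sketch: choose the webhook service name based on the
# configured MetalLB version. `metal_lb_tag_version` is an assumed
# variable name holding a value like "v0.14.4".
- name: Set MetalLB webhook service name fact
  ansible.builtin.set_fact:
    metal_lb_webhook_service_name: >-
      {{ 'metallb-webhook-service'
         if (metal_lb_tag_version | regex_replace('^v', '')) is version('0.14.4', '>=')
         else 'webhook-service' }}

# The post task could then wait on whichever name applies, e.g.:
- name: Wait for MetalLB webhook service
  kubernetes.core.k8s_info:
    kind: Service
    name: "{{ metal_lb_webhook_service_name }}"
    namespace: metallb-system
  register: webhook_svc
  until: webhook_svc.resources | length > 0
  retries: 10
  delay: 5
```

This keeps the version-specific naming in one place, so future renames only require touching the `set_fact` expression.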
Regards