Closed RamLavi closed 2 years ago
/hold
need to check pods module as well
/hold cancel
pods are not relevant for this issue
I wonder about the approach altogether; AFAIU, how the mutating webhook handles failures depends on its configuration - specifically, the value of its `failurePolicy`. IIUC, this approach will only work if the failure policy is `Fail`; since we own the webhook, that should not be an issue. Nevertheless, this might deserve a call-out in the release notes: this approach is tied to the `Fail` failure policy - which is the mutating webhook's default.
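For context, a minimal sketch of pinning a mutating webhook's `failurePolicy` to `Fail` using the upstream `admissionregistration/v1` Go types; the configuration and webhook names below are illustrative assumptions, not kubemacpool's actual ones:

```go
// A minimal sketch, assuming illustrative names: it shows how a
// MutatingWebhookConfiguration pins its failure policy to Fail using the
// upstream admissionregistration/v1 types. Not kubemacpool's actual code.
package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Fail means the API server rejects requests when the webhook is
	// unreachable, which is what a reject-in-the-webhook approach relies on.
	failurePolicy := admissionregistrationv1.Fail

	cfg := admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "kubemacpool-mutator"}, // hypothetical name
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name:          "mutatevirtualmachines.example.com", // hypothetical name
			FailurePolicy: &failurePolicy,
		}},
	}
	fmt.Println(*cfg.Webhooks[0].FailurePolicy) // prints: Fail
}
```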
/hold
The release notes should be updated indicating what is being fixed, and under what conditions - i.e. the `MutatingWebhookConfiguration` cannot be updated / must feature the `Fail` failure policy.
Well, this webhook implementation has always relied on the fact that the `failurePolicy` is `Fail`; nothing changed in that regard. Moreover, the check we are adding is already being done, only later, in the webhook validation phase.
So, considering what you say, we have 2 options here:
- we can fail the VM on our webhook, saying we cannot deal with duplicate interface names, due to our current implementation (this is what we do in this PR)
- we can ignore the VM on our webhook (without assigning MACs, thus avoiding the side effect), letting the VM fail in the validation part later on.
IMO I'm not too excited about option 2. I prefer to fail instead of ignoring the issue. What do you think @maiqueb @qinqon?
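For illustration, a minimal Go sketch of the option-1 check; the helper name and its plain string-slice input are assumptions for this example, since the real webhook inspects the VirtualMachine spec:

```go
// A minimal sketch of option 1: reject duplicate interface names before any
// MAC is allocated. The helper and its string-slice input are assumptions for
// this example; the real webhook operates on the VirtualMachine object.
package main

import "fmt"

// rejectDuplicateInterfaceNames returns an error on the first repeated name,
// mirroring the check this PR adds in the mutating webhook.
func rejectDuplicateInterfaceNames(names []string) error {
	seen := make(map[string]bool, len(names))
	for _, name := range names {
		if seen[name] {
			return fmt.Errorf("multiple interfaces named %q", name)
		}
		seen[name] = true
	}
	return nil
}

func main() {
	if err := rejectDuplicateInterfaceNames([]string{"default", "net1", "net1"}); err != nil {
		// Rejecting here means no MAC was handed out, so no "ghost VM" holds one.
		fmt.Println("VM rejected:", err)
	}
}
```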
Fail fast is always better.
> Well this webhook implementation has always relied on the fact that the `failurePolicy` is `Fail`; nothing changed in that regard. Moreover, the check we are adding is already being done, only later, in the webhook validation phase. So, considering what you say, we have 2 options here:
Right. Which means the mutating webhook mutates, and the validating webhook validates. So far so good.
> - we can fail the VM on our webhook, saying we cannot deal with duplicate interface names, due to our current implementation (this is what we do in this PR)
This is making the mutating webhook validate, which is something that - IMO - belongs in the validating webhook. I won't pretend to be versed enough in webhook design patterns to claim that validating in the mutating webhook is an anti-pattern, but it surely feels semantically weird. That's why I advocated out of band for the reconcile loop to sort out this inconsistency.
> - we can ignore the VM on our webhook (without assigning MACs, thus avoiding the side effect), letting the VM fail in the validation part later on.
And then eventually let the reconcile loop correct the state. IMO, this is more aligned with the Kubernetes design model.
> IMO I'm not too excited about option 2. I prefer to fail instead of ignoring the issue. What do you think @maiqueb @qinqon?

> Fail fast is always better.
... Now the key thing is (quoting @RamLavi) :
> Well this webhook implementation has always relied on the fact that the `failurePolicy` is `Fail`; nothing changed in that regard.
Right, meaning whatever implicit rules I'm trying to uphold were already broken. No point arguing about them now; your proposed fix seems more in line with the current design of this component.
Please detail the fix in the release notes - AFAIU, you're changing the behavior of the webhook. Furthermore... shouldn't you also delete this particular validation from the validating webhook? It will be redundant now.
EDIT: I now realize the validating webhook mentioned above is part of KubeVirt's `virt-api` - meaning it can be used without kubemacpool. As such, it is not redundant.
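For reference, a hedged sketch of what denying a request in an admission webhook looks like with controller-runtime's `admission` package; the denial message is illustrative, and kubemacpool's actual handler wiring is not shown:

```go
// A hedged sketch of denying a request in an admission webhook with
// controller-runtime; the message text is illustrative and the surrounding
// handler wiring of kubemacpool is omitted.
package main

import (
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

func main() {
	// Denying during the mutating phase short-circuits the request before any
	// mutation (MAC allocation) happens, avoiding the ghost-VM side effect.
	resp := admission.Denied("VM has multiple interfaces with the same name")
	fmt.Println("allowed:", resp.Allowed) // false: the API server rejects the request
}
```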
@maiqueb: changing LGTM is restricted to collaborators
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: qinqon
The full list of commands accepted by this bot can be found here.
The pull request process is described here
/hold cancel
What this PR does / why we need it: When applying a VM with multiple interfaces sharing the same name, the VM is rejected, but kubemacpool mishandles the MACs, causing a side effect where the MAC is no longer usable (it is "taken" by the ghost VM). This commit fixes that by checking for and rejecting VMs with duplicate interface names in the kubemacpool webhook context.
Special notes for your reviewer:
Release note: