Closed: nunnatsa closed this 6 months ago
/ok-to-test
Changes Missing Coverage | Covered Lines | Changed/Added Lines | %
---|---|---|---
controllers/kubevirtmachine_controller.go | 3 | 4 | 75.0%
pkg/kubevirt/machine.go | 56 | 61 | 91.8%
<!-- Total: | 59 | 65 | 90.77% -->

Totals | 
---|---
Change from base Build 7226166058: | 1.2%
Covered Lines: | 1008
Relevant Lines: | 1591
/hold
need to complete unit tests
/unhold
One high-level comment: I want to make sure the capi `Machine` object gets the condition messages from the KubeVirtMachine object.

For this KubeVirtMachine status:

```json
{
  "conditions": [
    {
      "lastTransitionTime": "2023-12-19T07:26:28Z",
      "message": "0 of 2 completed",
      "reason": "DVNotReady",
      "severity": "Info",
      "status": "False",
      "type": "Ready"
    },
    {
      "lastTransitionTime": "2023-12-19T07:28:28Z",
      "message": "DataVolume capi-quickstart-md-0-d7xjr-capi-quickstart-boot-volume import is not running: DataVolume too small to contain image",
      "reason": "DVNotReady",
      "severity": "Info",
      "status": "False",
      "type": "VMProvisioned"
    }
  ],
  "ready": false
}
```
We get this `Machine` (capi) status (only the `Reason` field gets here, without the message):

```json
{
  "bootstrapReady": true,
  "conditions": [
    {
      "lastTransitionTime": "2023-12-19T07:26:28Z",
      "message": "1 of 2 completed",
      "reason": "DVNotReady",
      "severity": "Info",
      "status": "False",
      "type": "Ready"
    },
    {
      "lastTransitionTime": "2023-12-19T07:26:18Z",
      "status": "True",
      "type": "BootstrapReady"
    },
    {
      "lastTransitionTime": "2023-12-19T07:26:28Z",
      "message": "0 of 2 completed",
      "reason": "DVNotReady",
      "severity": "Info",
      "status": "False",
      "type": "InfrastructureReady"
    },
    {
      "lastTransitionTime": "2023-12-19T07:24:36Z",
      "reason": "WaitingForNodeRef",
      "severity": "Info",
      "status": "False",
      "type": "NodeHealthy"
    }
  ],
  "lastUpdated": "2023-12-19T07:26:18Z",
  "observedGeneration": 2,
  "phase": "Provisioning"
}
```
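The propagation described above can be sketched in isolation. This is a minimal, self-contained approximation of condition mirroring: the `Condition` struct and `mirror` function below are illustrative stand-ins for cluster-api's `clusterv1.Condition` type and its `util/conditions` helpers, not the actual API.

```go
package main

import "fmt"

// Condition is a trimmed-down, illustrative stand-in for the
// cluster-api clusterv1.Condition type.
type Condition struct {
	Type     string
	Status   string
	Reason   string
	Severity string
	Message  string
}

// mirror copies a condition from an infra object (e.g. the KubeVirtMachine
// Ready condition) onto an owner object under a new type (e.g. the Machine
// InfrastructureReady condition), preserving reason, severity and message.
func mirror(src Condition, targetType string) Condition {
	return Condition{
		Type:     targetType,
		Status:   src.Status,
		Reason:   src.Reason,
		Severity: src.Severity,
		Message:  src.Message,
	}
}

func main() {
	// KubeVirtMachine Ready condition, as in the status dump above.
	kvmReady := Condition{
		Type:     "Ready",
		Status:   "False",
		Reason:   "DVNotReady",
		Severity: "Info",
		Message:  "0 of 2 completed",
	}
	infraReady := mirror(kvmReady, "InfrastructureReady")
	fmt.Printf("%s %s: %s (%s)\n",
		infraReady.Type, infraReady.Status, infraReady.Reason, infraReady.Message)
	// prints: InfrastructureReady False: DVNotReady (0 of 2 completed)
}
```

If only the `Reason` were copied in `mirror`, the Machine would show the behavior reported in this comment: the reason survives but the human-readable message is lost.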
What does the corresponding Machine object and MachineDeployment have in its status for these KubeVirtMachine conditions?
And the `MachineDeployment` status (neither the reason nor the message is reflected here):

```json
{
  "conditions": [
    {
      "lastTransitionTime": "2023-12-19T07:24:36Z",
      "message": "Minimum availability requires 1 replicas, current 0 available",
      "reason": "WaitingForAvailableMachines",
      "severity": "Warning",
      "status": "False",
      "type": "Ready"
    },
    {
      "lastTransitionTime": "2023-12-19T07:24:36Z",
      "message": "Minimum availability requires 1 replicas, current 0 available",
      "reason": "WaitingForAvailableMachines",
      "severity": "Warning",
      "status": "False",
      "type": "Available"
    }
  ],
  "observedGeneration": 1,
  "phase": "ScalingUp",
  "replicas": 1,
  "selector": "cluster.x-k8s.io/cluster-name=capi-quickstart,cluster.x-k8s.io/deployment-name=capi-quickstart-md-0",
  "unavailableReplicas": 1,
  "updatedReplicas": 1
}
```
However, the capi `Cluster` status shows none of this:

```json
{
  "conditions": [
    {
      "lastTransitionTime": "2023-12-19T07:26:11Z",
      "status": "True",
      "type": "Ready"
    },
    {
      "lastTransitionTime": "2023-12-19T07:26:11Z",
      "status": "True",
      "type": "ControlPlaneInitialized"
    },
    {
      "lastTransitionTime": "2023-12-19T07:26:11Z",
      "status": "True",
      "type": "ControlPlaneReady"
    },
    {
      "lastTransitionTime": "2023-12-19T07:24:41Z",
      "status": "True",
      "type": "InfrastructureReady"
    }
  ],
  "infrastructureReady": true,
  "observedGeneration": 2,
  "phase": "Provisioned"
}
```
Comparing the conditions with the current version (main branch), the capi `Machine` `InfrastructureReady` condition is the only change.

Main:

```json
{
  "lastTransitionTime": "2023-12-19T08:13:21Z",
  "message": "0 of 2 completed",
  "reason": "WaitingForBootstrapData",
  "severity": "Info",
  "status": "False",
  "type": "InfrastructureReady"
},
```

PR:

```json
{
  "lastTransitionTime": "2023-12-19T07:26:28Z",
  "message": "0 of 2 completed",
  "reason": "DVNotReady",
  "severity": "Info",
  "status": "False",
  "type": "InfrastructureReady"
},
```
No change in the `MachineDeployment` status.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: davidvossel, nunnatsa
The full list of commands accepted by this bot can be found here.
The pull request process is described here
What this PR does / why we need it
Be more verbose when a VM is not scheduled: add meaningful reasons and messages to the KubeVirtMachine conditions, to be reflected in the Cluster resources.
Also, make the `KubeVirtMachine.status.ready` field be printed even when it is false, and add it as a new printed column named "Ready".