maxheyer closed this issue 1 year ago
I noticed the same error, although I was able to create the cluster, so your missing nodes are definitely not related to that issue.
When a KubeVirtCluster is reconciled, several updates occur. For context, the CAPI PatchHelper is used, which simplifies the diff calculation for these resources: tl;dr, it's a helper with really good syntactic sugar for issuing partial updates to objects in a simpler way.
When performing the first update, the PatchHelper (on CAPI@v1.0.0) performs the following actions:
// Issue patches and return errors in an aggregate.
return kerrors.NewAggregate([]error{
	// Patch the conditions first.
	//
	// Given that we pass in metadata.resourceVersion to perform a 3-way-merge conflict resolution,
	// patching conditions first avoids an extra loop if spec or status patch succeeds first
	// given that causes the resourceVersion to mutate.
	h.patchStatusConditions(ctx, obj, options.ForceOverwriteConditions, options.OwnedConditions),

	// Then proceed to patch the rest of the object.
	h.patch(ctx, obj),
	h.patchStatus(ctx, obj),
})
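For context, a reconciler typically drives the helper roughly as follows. This is only a sketch assuming CAPI's sigs.k8s.io/cluster-api/util/patch package; names like r.Client and kubevirtCluster are placeholders for illustration, not the actual provider code:

```go
// Sketch: take a snapshot of the object, mutate it, then let the
// helper compute and issue the patches shown above.
patchHelper, err := patch.NewHelper(kubevirtCluster, r.Client)
if err != nil {
	return ctrl.Result{}, err
}

// ... mutate the in-memory object here (spec, status, conditions) ...

// Patch() diffs the object against the snapshot taken by NewHelper
// and issues the conditions, spec, and status patches in turn.
if err := patchHelper.Patch(ctx, kubevirtCluster); err != nil {
	return ctrl.Result{}, err
}
```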
The patchStatusConditions call is the reason you get the error ("KubevirtCluster.infrastructure.cluster.x-k8s.io \"testcluster\" is invalid: status.ready: Required value"), and the root cause is the markers on the KubeVirtCluster /status/ready field.
The status field is marked as required and, furthermore, it's missing a default value. A possible workaround is marking it as optional, although IMHO the status should always be reported, regardless of the initial conditions.
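To illustrate the workaround, the status type could carry markers along these lines. This is a sketch using standard kubebuilder markers; the field layout is an assumption for illustration, not the actual KubevirtCluster source:

```go
// KubevirtClusterStatus: sketch of the suggested marker change,
// not the actual upstream type definition.
type KubevirtClusterStatus struct {
	// Ready denotes that the cluster infrastructure is ready.
	// Marking it +optional (and/or giving it a default) avoids the
	// "status.ready: Required value" error on the first status patch.
	// +optional
	// +kubebuilder:default=false
	Ready bool `json:"ready,omitempty"`

	// ... other status fields omitted ...
}
```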
I'll open an MR to address this issue so we can discuss with the maintainers how to overcome this annoying and spurious error.
What steps did you take and what happened: I just followed the quick start guide to set up cluster-api with the infrastructure provider kubevirt.
After trying to apply the testcluster config, the capk-controller-manager pod gives the following errors:
What did you expect to happen: I expected to see a test cluster running.
Anything else you would like to add: I also tried to use clusterapi-operator to make cluster-api work, but I ran into the same problem.
My clusterctl command to generate the testcluster config:
Environment:
- Kubernetes version (use kubectl version): v1.27.3
- OS (e.g. from /etc/os-release): Talos Linux v1.4.7

/kind bug