schlakob opened this issue 7 months ago
Hello. Up until now we have preferred to stay on the older CAPI (v1alpha1) version. Rumor has it we may want to update, but that's not currently on the table.
This would then result in conflicts when running CAPI and the machine-controller in the same cluster. Is there, or will there be, a way to override the API group for the machine-controller?
If the Kubermatic stack ever upgrades to newer CAPI versions and adopts newer API groups, it would probably integrate into CAPI as a set of providers. I don't think this would really create a conflict since CAPI is pluggable by design.
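For what it's worth, because the two stacks register their CRDs under different API groups, they can already be listed and addressed independently. A minimal sketch, assuming default installs of both the machine-controller and upstream CAPI:

```
# Each stack's resources live under its own API group, so the
# identically named kinds (Machine, MachineSet, MachineDeployment)
# do not collide at the CRD level.
kubectl api-resources --api-group=cluster.k8s.io      # machine-controller
kubectl api-resources --api-group=cluster.x-k8s.io    # upstream Cluster API

# Fully qualified resource names keep kubectl unambiguous even though
# the short names overlap:
kubectl get machinedeployments.cluster.k8s.io --all-namespaces
kubectl get machinedeployments.cluster.x-k8s.io --all-namespaces
```

Since a CRD's full name includes its group, machinedeployments.cluster.k8s.io and machinedeployments.cluster.x-k8s.io are distinct objects to the API server.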
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubermatic-bot: Closing this issue.
/reopen
@kron4eg: Reopened this issue.
Hi,
we are currently using the machine-controller as a KubeOne addon, in its default configuration.
I was wondering why the machine-controller's CRDs (machinedeployments, machinesets, machines) are in the cluster.k8s.io API group and not in the upstream CAPI cluster.x-k8s.io API group. I noticed that CAPI switched from k8s.io to x-k8s.io a while ago.
Are there plans to use the same API group as upstream CAPI in the future?
This is important for us because we are considering deploying CAPI into the KubeOne cluster that runs the machine-controller, and we would like to ensure that no conflicts regarding the CRDs occur.
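A minimal pre-flight sketch for that scenario (the grep pattern is illustrative, and the CRD names below are based on the groups discussed in this thread, not on a real cluster's output):

```
# Hypothetical pre-flight check before installing upstream CAPI into a
# KubeOne cluster that already runs the machine-controller: list the
# machine-related CRDs together with the group each belongs to.
kubectl get crds -o name | grep -E 'cluster\.(x-)?k8s\.io$'

# With only the machine-controller installed, this should show the
# legacy-group CRDs, e.g.:
#   customresourcedefinition.apiextensions.k8s.io/machinedeployments.cluster.k8s.io
#   customresourcedefinition.apiextensions.k8s.io/machines.cluster.k8s.io
#   customresourcedefinition.apiextensions.k8s.io/machinesets.cluster.k8s.io
# The cluster.x-k8s.io CRDs that upstream CAPI installs have different
# full names, so installing them should not overwrite these.
```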