lentzi90 opened 1 month ago
Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all
Credentials can be put at either of two levels:
With the first approach, which you have chosen, there can be only one source of baremetal machines. With the second, you can have as many sources as machine deployments, and you can smoothly transition between sources by changing the credentials.
It can be interesting to have a cluster that spans different regions for redundancy. It is also useful in some edge scenarios, where you can have a cluster with a centralized control plane and many locations, each with potentially many baremetal providers.
As far as I know there is no explicit requirement in CAPI multi-tenancy contract that there is a single set of credentials.
Interesting point. I'm open to adding the same identityRef to the Metal3MachineTemplate as well. In fact, this would be identical to how CAPO does it. In practice I don't think it changes the proposal much. We can default the identityRef for the Metal3Machines to what is set on the Metal3Cluster and then allow the user to override that in the Metal3MachineTemplate.
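The defaulting described above can be sketched in Go. This is a minimal illustration, not the actual CAPM3 API: the type and field names (`IdentityRef`, `resolveIdentityRef`) are assumptions for the sake of the example.

```go
package main

import "fmt"

// IdentityRef points to a Secret holding credentials for the cluster
// where the BareMetalHosts live. Illustrative only, not the real API.
type IdentityRef struct {
	Name string
}

type Metal3Cluster struct {
	IdentityRef *IdentityRef
}

type Metal3Machine struct {
	IdentityRef *IdentityRef
}

// resolveIdentityRef implements the proposed defaulting: a Metal3Machine
// inherits the Metal3Cluster's identityRef unless it carries its own
// (e.g. set through the Metal3MachineTemplate).
func resolveIdentityRef(c *Metal3Cluster, m *Metal3Machine) *IdentityRef {
	if m.IdentityRef != nil {
		return m.IdentityRef
	}
	return c.IdentityRef
}

func main() {
	cluster := &Metal3Cluster{IdentityRef: &IdentityRef{Name: "cluster-creds"}}
	inherited := &Metal3Machine{}
	override := &Metal3Machine{IdentityRef: &IdentityRef{Name: "md-creds"}}
	fmt.Println(resolveIdentityRef(cluster, inherited).Name) // cluster-creds
	fmt.Println(resolveIdentityRef(cluster, override).Name)  // md-creds
}
```

With this shape, a machine deployment that sets its own identityRef gets its own source of hosts, while everything else falls back to the cluster-level credentials.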
The title and the user stories give the impression that the proposal solves the multi-tenancy problem. It is true that two clusters can be defined in two namespaces and point to one single namespace of baremetalhosts. But the users of those clusters will have full power over those BMHs, and the content of a BMH is equivalent to the content of the cluster. Unless you have something like HostClaims hiding the baremetalhost namespaces, you cannot share BMHs with users you do not trust. In fact, even if there are good use cases for this kind of remote access for HostClaims, I do not really see any for BMHs.
The implementation will be tricky. We need to maintain a watch over a namespace in one cluster that generates events in another cluster, and we need to know when to remove this watch. Probably when the kubeconfig secret is deleted.
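The watch lifecycle described above can be sketched with a small registry keyed by the kubeconfig Secret. This is a stdlib-only sketch of the bookkeeping, not a real controller: in practice each entry would wrap a client-go informer against the remote cluster, and the names here (`watchRegistry`, `EnsureWatch`, `RemoveWatch`) are assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// watchRegistry tracks one "remote watch" per kubeconfig Secret.
type watchRegistry struct {
	mu      sync.Mutex
	watches map[string]chan struct{} // secret key -> stop channel
}

func newWatchRegistry() *watchRegistry {
	return &watchRegistry{watches: make(map[string]chan struct{})}
}

// EnsureWatch starts a watch for the given kubeconfig Secret if one is
// not already running. Returns true if a new watch was started.
func (r *watchRegistry) EnsureWatch(secretKey string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, ok := r.watches[secretKey]; ok {
		return false
	}
	stop := make(chan struct{})
	r.watches[secretKey] = stop
	// A real implementation would launch an informer goroutine here that
	// forwards remote BareMetalHost events into the local work queue and
	// exits when stop is closed.
	return true
}

// RemoveWatch stops and forgets the watch, e.g. when the kubeconfig
// Secret is deleted, as suggested above.
func (r *watchRegistry) RemoveWatch(secretKey string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if stop, ok := r.watches[secretKey]; ok {
		close(stop)
		delete(r.watches, secretKey)
	}
}

func main() {
	reg := newWatchRegistry()
	fmt.Println(reg.EnsureWatch("ns1/tenant-kubeconfig")) // true
	fmt.Println(reg.EnsureWatch("ns1/tenant-kubeconfig")) // false: already watching
	reg.RemoveWatch("ns1/tenant-kubeconfig")
	fmt.Println(reg.EnsureWatch("ns1/tenant-kubeconfig")) // true again
}
```

Keying the registry on the Secret makes the teardown question concrete: the watch lives exactly as long as the kubeconfig Secret it was created from.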
Some fields in the status are copied back. But the dataTemplate mechanism also relies on labels and annotations. I expect that they will not be copied and that the controller will use the kubeconfig to get the actual values. Is that the case?
From the CAPI perspective this does solve multi-tenancy. Each tenant provides credentials for accessing their BareMetalHosts, similar to how it would work with cloud resources. If two tenants share the same BareMetalHosts, then that is essentially one tenant. This is also the same as for other providers: if you have access to the same OpenStack, AWS or GCP tenant, then you also have control over the same cloud resources.
I understand that this does not solve the same issue as the HostClaim, but I see this as essential. It also does not block implementing HostClaims before or after, as long as it is done with consideration. That was my goal with this: to make the design of the HostClaim easier by extracting this important part.
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: Rozzii
The full list of commands accepted by this bot can be found here.
The pull request process is described here
/hold I want to make sure people have a chance to check this and comment before we merge
Two weeks have passed and we have discussed it offline. I think it is safe to remove the hold.
/hold cancel
/test ?
@lentzi90: The following commands are available to trigger required jobs:
/test markdownlint
/test shellcheck
/test spellcheck
Use /test all to run the following jobs that were automatically triggered:
markdownlint
spellcheck
This adds a proposal for how to implement the CAPI multi-tenancy contract for CAPM3.