metal3-io / metal3-docs

Architecture documentation that describes the components being built under Metal³.
http://metal3.io
Apache License 2.0

Definition of the HostClaim resource #408

Open pierrecregut opened 2 months ago

pierrecregut commented 2 months ago

The HostClaim resource is introduced to address multi-tenancy in Metal3 and the definition of hybrid clusters with cluster-api.

It introduces four scenarios representing the three main use cases and how pivoting is handled.

Presentation of the design will be addressed in another merge request.
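
To make the discussion below easier to follow, here is a minimal sketch of what a HostClaim manifest could look like. It assumes a `kind` field selecting the type of compute resource, a host selector and a reference to a user-data secret; all field names are illustrative, not the final API.

```yaml
# Hypothetical HostClaim manifest; field names are illustrative, not the final API.
apiVersion: metal3.io/v1alpha1
kind: HostClaim
metadata:
  name: worker-0
  namespace: cluster-a
spec:
  # Kind of compute resource expected to back this claim
  # (e.g. baremetal, kubevirt); used to select the controller.
  kind: baremetal
  # Requirements used to select a matching host.
  hostSelector:
    matchLabels:
      cpu-architecture: x86_64
      pool: production
  # Cloud-init / user-data secret handed over to the selected host.
  userData:
    name: worker-0-user-data
```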

metal3-io-bot commented 2 months ago

Hi @pierrecregut. Thanks for your PR.

I'm waiting for a metal3-io member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
dtantsur commented 2 months ago

/ok-to-test

pierrecregut commented 2 months ago

@lentzi90 I think I need to write down the alternative/risks sections right now. I have a draft but it is not ready yet (https://github.com/pierrecregut/metal3-docs/commit/5ed5f5f3118e85b6786708213dcbab842f1267da). I will push it here when it is ready and I will try to faithfully incorporate your approach.

pierrecregut commented 2 months ago

I have added the 'alternative solutions' and 'risk and mitigation' parts so that it is clearer how HostClaim compares with alternative solutions. I think I have captured @lentzi90's proposal in "HostClaims as a right to consume BareMetalHosts" but I probably missed part of it. If some obvious alternatives are also missing, now is the time to add them. The advantages/drawbacks are biased and I am open to improvements. Unfortunately the size of the PR is now clearly XL.

pierrecregut commented 2 months ago

> This means that CAPM3 would deal with disposable resources (HostClaims) through an API (the Kubernetes API of the cluster holding the BareMetalHosts) protected by credentials. This was the original goal of the architecture, but then CAPM3 grew to become completely coupled to BMO, and this is something I hope to fix now.

I completely agree with this shift from recyclable to disposable resources. The main question is where we expose the API, what is kept and manipulated as a resource, and what is just a message over a link.

> One thing that is not clear to me is how you intend things to work with VMs instead of BareMetalHosts. Are you expecting to create a separate controller instead of BMO?

Yes, and we have already done it with the Host resource in Kanod (which is a HostClaim with a few defects that we intend to correct for the final version). The implementation for KubeVirt VMs is https://gitlab.com/Orange-OpenSource/kanod/host-kubevirt, and the version for having hosts on a remote cluster is https://gitlab.com/Orange-OpenSource/kanod/host-remote-operator. The implementation for BMH is in https://gitlab.com/Orange-OpenSource/kanod/host-operator, but mixing the generic definition of the custom resource (with the quota webhook) and the specific implementation for BMH was not necessarily the best idea.

lentzi90 commented 2 months ago

Ok I think I understand now how you want the HostClaim to work. Let me see if I can suggest something that would work both for the hybrid, non-CAPI case and for CAPI without making things too complicated.

This is what I want to avoid. It is too complicated, and I see no reason (from CAPI/CAPM3 perspective) to have the LocalClaim.


┌────────┐      
│Machine │      
├────────┴┐     
│M3Machine│     
├─────────┴┐    
│LocalClaim│    
└──────────┘    

┌─────────────┐ 
│RemoteClaim  │ 
├─────────────┤ 
├─────────────┤ 
│BareMetalHost│ 
└─────────────┘ 

Previously I suggested something like this, where the M3Machine fills the same need as the LocalClaim. I understand that this is not ideal for your use-case because there is no M3Machine then.

 ┌────────┐      
 │Machine │      
 ├────────┴┐     
 │M3Machine│     
 └─────────┘     

 ┌─────────────┐ 
 │RemoteClaim  │ 
 ├─────────────┤ 
 ├─────────────┤ 
 │BareMetalHost│ 
 └─────────────┘ 

Now I suggest instead this:

 ┌────────┐                                              
 │Machine │                                              
 ├────────┴┐         ┌─────────┐        ┌─────────┐      
 │M3Machine│         │HostClaim│        │HostClaim│      
 └────┬────┘         └────┬────┘        └────┬────┘      
      │                   │                  │           
      │                   │                  │           
 ┌────▼────────┐     ┌────▼────────┐    ┌────▼─────────┐ 
 │BMHClaim     │     │BMHClaim     │    │VMClaim       │ 
 ├─────────────┤     ├─────────────┤    ├──────────────┤ 
 ├─────────────┤     ├─────────────┤    ├──────────────┤ 
 │BareMetalHost│     │BareMetalHost│    │VirtualMachine│ 
 └─────────────┘     └─────────────┘    └──────────────┘ 

This way you can implement the HostClaim controller separately and get all the functionality you want. For Metal3, we would add the BareMetalHostClaim to BMO. No separate controller is needed for this. BMO can reconcile them and match them with BareMetalHosts. CAPM3 would need some parts similar to the HostClaim controller, but only for BareMetalHostClaims. I don't see that we would make it work with VMs, for example.

What do you think about this?

pierrecregut commented 2 months ago

> This is what I want to avoid. It is too complicated, and I see no reason (from CAPI/CAPM3 perspective) to have the LocalClaim.

LocalClaim serves two purposes:

As an additional bonus point, I think it would give node-reuse a cleaner implementation because it means that we keep the host binding rather than creating a new one.

> CAPM3 would need some parts similar to the HostClaim controller, but only for BareMetalHostClaims. I don't see that we would make it work with VMs, for example.

The whole point of supporting hybrid is to make CAPM3 work with VMs. And if the contract exposed by HostClaim is clear, this is exactly what we have. With our PoC, we can create a workload cluster with either the control plane or some of the machine deployments on KubeVirt VMs (we use Multus and a bridge on each node of the management cluster to attach the VMs to the network of the bare metal part of the target cluster). Then you can even change the compute resource kind of machine deployments between upgrades seamlessly.

If we are not sharing the CR on compute resources, I would rather avoid the BMHClaim/VMClaim parts. The only thing you need to know is, for each user, a mapping from LocalClaim ids to BMHs (credentials being associated with the user). The reason we have a claim on the remote part is that we distinguish the implementation of the wire protocol (and with it the notion of users) from the handling of compute resources.

In our implementation of the remote HostClaim, we have a very crude model of user (just a token representing more or less a namespace). A production-grade version would probably distinguish a declarative part (which cluster, which namespace) from the authenticated part (the user) and would provide interfaces to identity management frameworks (Keycloak and others) that handle the definition of users and credential protection. We have not looked seriously at that part, and one of the advantages of our approach is that it was very easy to replace our Local HostClaim <-> Remote HostClaim controller... or not use any when the BMHs and clusters are in the same management cluster.

The compromise I suggested yesterday was rather the following, where circles represent services, not resources. The notion of user is not represented. The important part is that the dialog between the HostClaim controller and the BMH / VM servers is standardized.

graph TD;
   m3m1[Metal3Machine] --> hc1[HostClaim 1];
   m3m2[Metal3Machine] --> hc2[HostClaim 2];
   hc1 --> bms((BMH Server));
   hc3[HostClaim 3] --> bms;
   hc2 --> vms((VM server));
   hc4[HostClaim 4] --> vms;
   subgraph BareMetal Domain
   bms --hc1--> bmh1[BMH 1];
   bms --hc3--> bmh2[BMH 2];
   bms --> bmh3[BMH 3];
   end
   subgraph KubeVirt Domain
   vms --hc2--> vm1[VM 1];
   vms --hc4--> vm2[VM 2];
   end

A last warning: the pivot of the bootstrap cluster, which is still simple when we have the M3M, Host and BMH in the same namespace during initialization, becomes much more complex if we always have a service that must migrate.

lentzi90 commented 2 months ago

I'm not convinced that we need a separate HostClaim controller. That is why I see no issue with duplicating the work of that controller in CAPM3.

If the whole point is to get CAPM3 to work with VMs and especially KubeVirt, what about this? It seems like a very tempting alternative to me. We just add support for BMCs in KubeVirt and the whole Metal3 stack can work with it directly. This is what I hope to use for our CI in the future.

The multi-tenancy / decoupling of CAPM3 from the BMHs is still very much needed, but for that I see no need for the LocalClaim. The Metal3Machines are supposed to fill that same function, just like OpenStackMachines do in CAPO. They are the local resources that the user can inspect.

pierrecregut commented 2 months ago

KubeVirt is just an example. You can add a BMC to most VMs. This has existed for OpenStack for testing purposes for a long time. The Sylva project has already developed an equivalent of the mentioned blueprint for KubeVirt for their CI, but going directly to the libvirt layer: https://gitlab.com/sylva-projects/sylva-elements/container-images/libvirt-metal#libvirt-metal . Unfortunately Redfish/IPMI manages hardware, it does not create it. So we completely lose the advantages of being a disposable resource, or we need again a notion of pools synchronized with the cluster size. Things quickly become very complex again, and that is probably why nobody has pushed it further. Networking and VM definitions are really mixed in KubeVirt, and as the networking must be reconfigured for each cluster, pre-definition of the VMs is also harder.

lentzi90 commented 2 months ago

Ok, makes sense. So you will need some code in CAPM3 to make it possible. (Yes it could be in the HostClaim controller instead, but I hope you see why we may want to include that in CAPM3/BMO directly.)

I think the core issue remaining here is about how the HostClaim is bound to the BareMetalHost (or VM). If I understand correctly you would like a HostClaim controller (HCC) that is independent of BMO and CAPM3. It would bind the HostClaims to BMHs or VMs and propagate status back. I think that should be part of BMO directly. It would be such an integral part of how to work with BMO.

I'll be back with more comments on how to bind the HostClaim later.

pierrecregut commented 2 months ago

> If I understand correctly you would like a HostClaim controller (HCC) that is independent of BMO and CAPM3. It would bind the HostClaims to BMHs or VMs and propagate status back.

In the proposal, there is no piece of code that handles both BMH and VM at the same time. I will try to summarize what I think our mental models are. Please correct me where I misrepresent them. If we forget about the single-cluster case, where my proposal did not need an intermediate protocol, the core of what we agree on is:

graph TD;
  Metal3Machine --> i1[...];
  i1 -->p[/REST API for Host: ↓ selector + workload -  ↑ resource status + most metadata/];
  p --> c((server));
  c --> i2[...];
  i2 --> BareMetalHost

Rectangles are custom resources, parallelograms are APIs, and circles are HTTP endpoints implementing the API.

My initial proposal was:

graph TD;
   Metal3Machine --> hc0[HostClaim - remote];
   hc0 --> p[/REST API/];
   p --> c((HostClaim server));
   c --> hc1[HostClaim - baremetal];
   c --> hc2[HostClaim - kubevirt];
   hc1 --> BareMetalHost;
   hc2 --> vm[Kubevirt VM];

The word after HostClaim is something that can act as a compute kind selector for controllers. To do the selection on the Kubernetes API server side, a label should be used, but that is just a code optimization. We have as many controllers as there are selectors.
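
To illustrate the label optimization mentioned above (a sketch only; the label key is hypothetical), the kind selector would simply be mirrored as a label so that each controller can restrict its watch on the API server side instead of filtering in its reconciler:

```yaml
# Hypothetical HostClaim carrying its compute kind both as a field and as a label.
apiVersion: metal3.io/v1alpha1
kind: HostClaim
metadata:
  name: claim-2
  labels:
    # Mirrors spec.kind so that the kubevirt controller can watch only its own claims.
    hostclaim.metal3.io/kind: kubevirt
spec:
  kind: kubevirt
```

Each controller (baremetal, kubevirt, remote, ...) would then watch only the claims carrying its own kind.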

If I understand correctly, your proposal is rather something like:

graph TD;
  Metal3Machine --> p[/REST API/];
  p --> c((BMH server));
  c --> BareMetalHost;

You have a resource between the BMH server and the BareMetalHost, but if I am not wrong, it only either uniquely identifies a user or creates a unique password for the BareMetalHost, depending on the way you restrict access. Note that you probably want a notion of user to limit the scope of "watch" (something you need to get a single channel to be notified of status changes). With the user approach, I would just use a custom resource representing the user as consumerRef and use the name of the BMH as the id (I don't think it reveals anything secret to the other end).

A custom resource hides both an API and the Kubernetes API server that implements this API and delegates actions to the resource controller watching the resource.

So, what I am not sure about is whether, in the above drawing, the protocol/server pair is the Kubernetes API server (you mentioned directly using a kubeconfig at some point) or whether, as recommended in the Kubernetes documentation, you implement your own server. With Kubernetes RBAC, it may be difficult to restrict the access of a given user to only the BMHs it owns unless you have separate namespaces.

If we agree on a public, versioned API, I think we can get back more or less what is needed for hybrid clusters and non-Kubernetes workloads with:

graph TD;
  Metal3Machine --> p[/REST API/];
  Metal3Host --> p;
  p --> c((BMH server));
  c --> BareMetalHost;
  p --> c2((VM server));
  c2 --> vm[Kubevirt VM];

There is no real difference between Metal3Host and the "HostClaim - remote" from a previous drawing (I have used the name given in a previous design proposal).

lentzi90 commented 2 months ago

Sorry for my slow response. There are too many other things ongoing. I think we can get away with something simpler still.

> With Kubernetes RBAC, it may be difficult to restrict the access of a given user to only the BMHs it owns unless you have separate namespaces.

Yes, to get proper isolation we would need separate namespaces. However, that does not mean we will necessarily be limited to separate BMH pools. I suggest BMO would be responsible for associating BMHs with HostClaims. This means that we can give the power to control how that happens to the same entity that creates the BMHs.

flowchart LR;
  CAPM3 --creates--> c[HostClaim];
  BMO--associates--> c;
  c --associated with--> BareMetalHost;
  c --approved by--> BareMetalHostPolicy;

This way, the BMO owner can write a policy to approve or deny HostClaim associations. (I stole this idea from cert-manager approver policies.) They can allow associations from multiple namespaces to get a common pool, or they can allow only one specific namespace. We could even make the policies more fine-grained if needed.
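
A minimal sketch of what such a policy could look like, loosely modeled on the cert-manager approver policies mentioned above; the BareMetalHostPolicy kind and its fields are hypothetical:

```yaml
# Hypothetical policy letting BMO decide which HostClaims may be associated
# with the BareMetalHosts of this namespace.
apiVersion: metal3.io/v1alpha1
kind: BareMetalHostPolicy
metadata:
  name: shared-pool
  namespace: baremetal-pool        # namespace holding the BMHs
spec:
  # HostClaims coming from these namespaces may be associated with BMHs here;
  # any other HostClaim is denied.
  allowedClaimNamespaces:
    - cluster-x
    - cluster-y
    - cluster-z
```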

pierrecregut commented 2 months ago

> Yes, to get proper isolation we would need separate namespaces. However, that does not mean we will necessarily be limited to separate BMH pools. I suggest BMO would be responsible for associating BMHs with HostClaims. This means that we can give the power to control how that happens to the same entity that creates the BMHs.

If we have separate namespaces and the HostClaim resource contains the requirements and the selectors, I think that is exactly the HostClaim resource as it is described. The only point where your suggested implementation and ours differ is the location of the controller. I totally agree with putting the controller in the BMO project. The only reason we did it differently is that we tried to limit the places where we forked an existing Metal3 project in the scope of the PoC.

pierrecregut commented 2 months ago
> - CAPM3 user needs access to create HostClaims in the cluster where the BMHs are

I think that is the real point where we differed. In the cross-cluster case, you assume that the user will somehow give a kubeconfig with a restricted scope in a secret to the CAPM3 controller. I guess that if we are in the same cluster, we could just assume that the HostClaim must be created in the same namespace as the M3M, and in that case we are back to the existing proposal.

We differ because:

A colleague suggested implementing a proxy. That would be more or less transparent to the CAPM3 side and would solve our security concern. The proxy would have full privileges on HostClaim resources in any namespace. It would replace user identities with its own after verifying that the user is entitled to perform the request. That way we are sure that we do not create a security hole that goes beyond the scope of Host and BMH management.

I will modify the proposal to introduce the kubeconfig credential that will be used in place of the standard service account. It could use the namespace defined in the context as the target namespace. We will keep the implementation of a proxy in mind to avoid giving an agent too many rights when we want to automate the management of users.
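
As a sketch of that change (resource names are hypothetical), the kubeconfig would be stored in a secret next to the cluster definition, and the namespace of its current context would carry the target namespace for HostClaims:

```yaml
# Hypothetical secret holding the scoped kubeconfig that CAPM3 would use,
# instead of its in-cluster service account, to create HostClaims remotely.
apiVersion: v1
kind: Secret
metadata:
  name: cluster-a-hostclaim-kubeconfig
  namespace: cluster-a
type: Opaque
stringData:
  kubeconfig: |
    apiVersion: v1
    kind: Config
    # Cluster, user and context entries are elided in this sketch; the namespace
    # of the current context would be used as the target namespace for HostClaims.
    clusters: []
    users: []
    contexts: []
    current-context: ""
```

The Metal3Cluster or Metal3Machine objects would then reference this secret through an identityRef-like field, similar to what other CAPI providers do; that field name is only an assumption here.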

pierrecregut commented 2 months ago

> This way, the BMO owner can write a policy to approve or deny HostClaim associations. (I stole this idea from cert-manager approver policies.) They can allow associations from multiple namespaces to get a common pool, or they can allow only one specific namespace. We could even make the policies more fine-grained if needed.

BMO policies are closer to quotas on pods than to cert-manager policies: quantitative and clearly associated with a user/namespace. That is why Laurent first restricted HostQuotas to namespaces, using the Kubernetes quota implementation as a reference (https://gitlab.com/Orange-OpenSource/kanod/host-quota). OpenShift has a notion of cluster-wide quotas for projects, but I believe it expresses constraints on each namespace.

lentzi90 commented 2 months ago

> I think that is the real point where we differed. In the cross-cluster case, you assume that the user will somehow give a kubeconfig with a restricted scope in a secret to the CAPM3 controller. I guess that if we are in the same cluster, we could just assume that the HostClaim must be created in the same namespace as the M3M, and in that case we are back to the existing proposal.

We can default to using "in-cluster credentials" (i.e. service account and RBAC). It would not have to be in the same namespace, but it would probably make sense for single-cluster scenarios anyway.

> We do not rely on kubeconfig. We want to automate user and cluster creation. We do not want to have an entity that can create a role over any namespace and a user. This is equivalent to having a controller with full admin power. But my implementation is not great in the sense that it reimplements a lot of the apiserver in a hacky way.

I struggle to understand this. If you create the cluster, you are usually cluster admin anyway. An entity that manages the users does not need to be coupled to the Kubernetes API and RBAC, depending on what you use for authentication. You could handle the users separately and use OIDC or JWT with user groups to get the correct privileges in the cluster.

I don't mean that we have to use kubeconfigs directly, but considering that the format is well tested and contains everything we need, I think it is the obvious choice. We can consider other alternatives of course, but BMO provides a Kubernetes API, and then kubeconfig files make a lot of sense.

> A colleague suggested implementing a proxy. That would be more or less transparent to the CAPM3 side and would solve our security concern. The proxy would have full privileges on HostClaim resources in any namespace. It would replace user identities with its own after verifying that the user is entitled to perform the request. That way we are sure that we do not create a security hole that goes beyond the scope of Host and BMH management.

Not sure I understand this. To me it sounds like the proxy is the security issue here, with full access to all BMHs through HostClaims. Who would control this proxy? Where would it run? How would authentication work, and is there a reason to do that in the proxy instead of in the Kubernetes API server?

> I will modify the proposal to introduce the kubeconfig credential that will be used in place of the standard service account. It could use the namespace defined in the context as the target namespace.

Sounds good!

> BMO policies are closer to quotas on pods than to cert-manager policies: quantitative and clearly associated with a user/namespace. That is why Laurent first restricted HostQuotas to namespaces, using the Kubernetes quota implementation as a reference (https://gitlab.com/Orange-OpenSource/kanod/host-quota). OpenShift has a notion of cluster-wide quotas for projects, but I believe it expresses constraints on each namespace.

I'm not after limiting the amount or type of hosts at this point. I do see the usefulness of that and we should definitely add it. However, I think we must first make sure that authorization is in place. Not "you cannot use so much", rather "you have no access here".

I think there are two scenarios, with one special case:

  1. Full isolation: BMHs are grouped per user in separate namespaces or even clusters. BMO/BMHs potentially configured to only associate HostClaims in the same namespace. If they are in separate clusters there is no need for that.
  2. Common pool(s): A pool of BMHs in one namespace are shared by multiple users, but not all. A policy is needed to tell BMO what HostClaims are allowed to be associated. E.g. BMHs in namespace A can be associated with HostClaims in namespaces X, Y and Z. No other HostClaims are allowed.
  3. Free for all: All BMHs are in a single namespace with no limitations on what HostClaims can be associated. This is a special case of scenario 2.

pierrecregut commented 2 months ago

The case I want to address is when you have an infrastructure-wide system for managing users and identities. You create a new user X that should be able to describe a new cluster in management cluster A using BareMetalHosts in cluster B. How do I automate the creation of a namespace and the associated kubeconfig in B without a process that has full admin rights in B, which should not be strictly necessary?

The idea was to have just a proxy in cluster B that has only rights on HostClaims (so more or less what a HostClaim controller already has) and that uses the centralized authentication/authorization management system to check whether the credentials it has received can be used to perform an action in a specific namespace. We still need some glue around it to create and distribute such credentials, but at least there is a path where we do not create a new Kubernetes user and a RoleBinding (to a ClusterRole granting control over HostClaims).

The issue can be left open for the moment as it can be addressed separately.

pierrecregut commented 2 months ago

> However, I think we must first make sure that authorization is in place. Not "you cannot use so much", rather "you have no access here".

I think this is addressed by the section "Security Impact of Making BareMetalHost Selection Cluster-wide" with the use of an annotation on the BareMetalHost. One of the advantages is that it is mandatory, so when we upgrade a deployment that did not use HostClaims, there is no risk that BareMetalHosts are bound to HostClaims in another namespace.
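
As a sketch of that mechanism (the annotation key and value syntax are hypothetical, following the idea of a namespace list or regular expression discussed further down the thread):

```yaml
# Hypothetical BareMetalHost explicitly opting in to cross-namespace binding.
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: server-42
  namespace: baremetal-pool
  annotations:
    # Only HostClaims from these namespaces may be bound to this host.
    # A BMH without the annotation keeps today's behavior and never binds
    # to a HostClaim from another namespace.
    hostclaims.metal3.io/allowed-namespaces: "cluster-x,cluster-y"
spec:
  online: true
  bmc:
    address: redfish://192.168.1.42/redfish/v1/Systems/1
    credentialsName: server-42-bmc-secret
```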

pierrecregut commented 2 months ago

I have pushed a new commit that should reflect the fact that the CAPM3 controllers handle the access to the remote cluster using a kubeconfig. There is a new scenario and I have modified the goals/non-goals. My approach with a "remote" HostClaim is now an alternative implementation.

pierrecregut commented 2 months ago

I have modified the last commit. I overlooked the fact that HostClaim uses secrets to define the cloud-init configuration, even though those secrets are an important part of the protocol implemented for the "remote" HostClaim. This means that the role binding must give admin rights over secrets in the remote namespace.
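
A sketch of the corresponding RBAC in the remote namespace; names and the HostClaim API group are assumptions, and the secret rights are the point of the remark above:

```yaml
# Hypothetical Role for a cluster owner in the remote namespace: full control
# of HostClaims and of the secrets carrying the cloud-init configuration.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: hostclaim-user
  namespace: cluster-a-hosts
rules:
  - apiGroups: ["metal3.io"]          # assumed API group for HostClaim
    resources: ["hostclaims", "hostclaims/status"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hostclaim-user
  namespace: cluster-a-hosts
subjects:
  - kind: User
    name: cluster-a-owner             # identity carried by the kubeconfig given to CAPM3
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: hostclaim-user
  apiGroup: rbac.authorization.k8s.io
```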

Rozzii commented 1 month ago

It is a very long proposal and discussion, so bear with me if I have missed something :D. First of all, thank you for driving this discussion!

pierrecregut commented 1 month ago
> - One topic would be the multi-tenancy, where I agree with the approach of @lentzi90 that we would need a combination of "HostClaim" paired with a sort of "Policy object" to facilitate an indirection. The indirection would be implemented in such a way that neither CAPM3 nor BMO would have access to each other's namespaces, but they would have access to the namespace where the HostClaim is created, and based on the "Policy object" BMO would either pair the claim with a BMH or not.

I mostly agree on this part. The scope of the Policy object is more complex. Hardware owners need to restrict the access to their BMH objects. Cluster owners want to control on which hardware they are deployed. Finally, at some point we will also need a way to do accounting and restrict the resources a cluster owner can acquire. If we forget the last point, we still have two stakeholders with different needs and one object. We also want to make sure that servers used to build clusters without HostClaims are not automatically promoted to be used by HostClaims from other namespaces. So I still do not know what the right Policy object is. In the proposal, only the first part is handled, with an annotation on the BMH to authorize the other namespaces to which it can be bound. The content could be a regular expression, or something like a list with catch-all expansion.

pierrecregut commented 1 month ago
> - I do not like the "Proxy approach"; that proxy would be a "third party" with very high privileges.

The proxy does not need higher privileges than the regular controller (a view over BMHs and... their associated resources). Without the proxy, at some point you will need to create Kubernetes users and their associated roles. This is a high-privilege task and it needs to be automated at some point. I think it is safer to implement a custom protocol (or at least have a very custom proxy that restricts secrets to a given type) because the associated resources of the BMH are secrets, and it may be hard to restrict which secrets can be viewed by the proxy; but still, a proxy is better than an agent creating roles on the fly.

In any case, we do not necessarily need it in the first implementation. Some tasks will be manual...

pierrecregut commented 1 month ago
> - The second topic is whether we should allow CAPM3 to create M3Machines on top of VMs via the "HostClaim". I very much oppose this approach, for multiple reasons: Metal3 has a very specific scope of managing the e2e life cycle of bare metal machines, and there are CAPI providers for VMware, KubeVirt, BYOH and others that manage either "BM hosts that don't require life cycle management" or VMs.

> - I understand that using more than one provider to create one homogeneous cluster is not a trivial or even fully supported use case in the CAPI ecosystem, but extending Metal3 to handle VMs is not the way to solve this problem IMO.

> - My counter-proposal would be: if you need a multi-tenant environment where users can create clusters from a homogeneous set of BMHs based on VMs and BMs, then let's implement the "multi-tenancy" first, and then for your specific use case you should use something called a "virtual bare metal solution". Virtual bare metal is already supported with OpenStack via sushy-tools, and it is planned to be supported in KubeVirt (https://github.com/kubevirt/community/blob/main/design-proposals/kubevirtbmc.md). Without going into the implementation details of the different virtual bare metal solutions, the main takeaway is that from the BMO perspective these solutions provide BMC addresses and emulate the behavior of BMCs, so BMO can handle the VMs the same way it would handle regular BMs, in such a way that no extra logic is needed in the Metal3 stack.

As long as HostClaims are supported for the multi-tenancy approach, the proposal does not "extend" Metal3 beyond the support for a rather opaque field "kind" that can be used to choose which controller handles which HostClaim. The important point is that a correctly implemented HostClaim necessarily extracts the life-cycle management part out of the CAPM3 provider.

In the previous answers, I already explained why "virtual bare metal" is a complex solution that requires a lot of additional parts, like a kind of autoscaler to have the right number of VMs with the appropriate characteristics. Also, sushy used as a virtual BMC has always been considered somewhat experimental and, from a security perspective, only good for CI. And the fact that other providers "know" how to handle the network does not help.

There is an additional reason: having CAPM3 as a unique "node manager" simplifies the life of the cluster managers. There are differences in the way CAPM3, CAPO and CAPK handle things such as node reuse, node validation, etc., and these differences do not provide anything useful to projects like Sylva or Kanod that try to provide clusters on demand on various compute resources.

lentzi90 commented 1 month ago

How about we split out the CAPI multi-tenancy contract into its own proposal? I could hopefully push that this week already. Perhaps it will then be easier to make progress on HostClaims here separately. I would focus the multi-tenancy proposal on just implementing the CAPI contract, without any HostClaim or changes to how CAPM3 interacts with BMHs. That should be a smaller change, and with it in place I hope it will be easier to reason about the HostClaims here.

lentzi90 commented 1 month ago

Pushed the multi-tenancy proposal: https://github.com/metal3-io/metal3-docs/pull/429

metal3-io-bot commented 3 weeks ago

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

Once this PR has been reviewed and has the lgtm label, please assign zaneb for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

- **[OWNERS](https://github.com/metal3-io/metal3-docs/blob/main/OWNERS)**
- **[design/OWNERS](https://github.com/metal3-io/metal3-docs/blob/main/design/OWNERS)**

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.

pierrecregut commented 3 weeks ago

I have pushed a new version with the following modifications: