kubernetes-sigs / cluster-api-provider-kubevirt

Cluster API Provider for KubeVirt
Apache License 2.0

Use an alternative machine bootstrap flag probing strategy (no SSH) #230

Closed BarthV closed 3 days ago

BarthV commented 1 year ago

What steps did you take and what happened:

At our company, we are building a highly secure infrastructure with several constraints imposed by the French/European sovereign cloud label. Meeting these constraints led us to place the CAPI management cluster in a dedicated network and the managed clusters in separate ones.

Following these rules, we recently blocked all traffic between the CAPI cluster network and the target managed cluster networks (and also disabled the SSH daemon in all our VMs). To schedule and manage cluster lifecycles, we expected CAPI/CAPK to reach the managed clusters' apiservers only through their exposed load-balanced endpoints, which are open to the rest of the network via the underlying KubeVirt LB capabilities.

In fact we discovered (here) that CAPK requires direct SSH access to the VM IP in order to validate CAPI Machine bootstrap success (using the CAPI sentinel file convention). This also seems to be the only SSH command in the whole CAPK source code.

With this restriction in place, CAPK is never able to fully provision a single KubeVirt VM, because the VM bootstrap is never acknowledged.

$ kubectl get machine -n capknossh 
NAME                            CLUSTER   NODENAME   PROVIDERID   PHASE          AGE     VERSION
capknossh-cp-lxpzz              capknossh                         Provisioning   9m41s   v1.26.2
capknossh-wk-6996b7555c-98sgs   capknossh                         Pending        9m41s   v1.26.2
capknossh-wk-6996b7555c-fhm8z   capknossh                         Pending        9m41s   v1.26.2

The CAPI specification leaves infrastructure providers free to choose how they verify the sentinel file. So I'd like to open a discussion and try to find solutions that avoid such SSH connections, which are a very sensitive topic for us. Ultimately, I'd love to have a more "read-only", auditable and secure way to check Machine bootstrap status.

Possible answers could be:

  1. Add a flag to completely skip the SSH bootstrap file check

    • "Always return true" / bypass sentinel file check: We consider VM always bootstraped...it might break CAPI sentinel contract , and probably produce undesireable side-effects for further reconciliation loops (but I could also work after several retries, this might bé thé simplest solution).
  2. Use cloud-init to inject a simple HTTP daemon into the VM (reserving a port for it, in the same way as the current SSH daemon), and use this endpoint to poll bootstrap status (see the sketch after this list)

    • this HTTP server would only serve the sentinel file (and might use a simple password, or even stay in the clear)
    • CAPK would poll the sentinel file content through it.
    • since the VM is supposed to always run a kubelet regardless of the VM OS, this web service could easily be configured as a static pod.
  3. Use any other less sensitive strategy or protocol to expose and retrieve the CAPI sentinel file (remote KubeVirt pod exec, or any other smart idea ...)
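
To illustrate option 2, here is a rough sketch of what such an in-VM sentinel server could look like as a static pod. The image, port, and file names are placeholders I picked for the example, not anything CAPK defines today:

```yaml
# Hypothetical static pod, e.g. dropped into /etc/kubernetes/manifests by cloud-init.
apiVersion: v1
kind: Pod
metadata:
  name: capi-sentinel-http
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: httpd
    image: busybox:1.36
    # Serve only the CAPI sentinel directory, read-only, on a reserved port.
    command: ["httpd", "-f", "-p", "8999", "-h", "/run/cluster-api"]
    volumeMounts:
    - name: sentinel
      mountPath: /run/cluster-api
      readOnly: true
  volumes:
  - name: sentinel
    hostPath:
      path: /run/cluster-api
      type: Directory
```

CAPK would then only need to poll http://<vm-ip>:8999/bootstrap-success.complete, optionally over TLS as discussed further down.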

What did you expect to happen: In order to comply with "government-tier" security rules, we'd expect CAPK not to use any remote SSH access to check machine bootstrap success. Allowing a single component to hold SSH keys and reach every VM of the Kubernetes infrastructure breaks our required legal security compliance. We think that retrieving the sentinel file status should instead be done with a read-only remote strategy, using a less privileged and less interactive protocol than SSH.

Environment:

/kind bug

BarthV commented 1 year ago

Since this issue is a huge blocker for us, I'd be really happy to discuss this topic and actively work with you to find a possible alternative / optional sentinel file check strategy.

BarthV commented 1 year ago

This comment seems to show that the authors are aware of the limitation posed by the SSH strategy: is it possible that you have already thought about a generic substitute check for it?

func (m *Machine) SupportsCheckingIsBootstrapped() bool {
    // Right now, we can only check if bootstrapping has
    // completed if we are using a bootstrapper that allows
    // for us to inject ssh keys into the guest.

    if m.sshKeys != nil {
        return m.machineContext.HasInjectedCapkSSHKeys(m.sshKeys.PublicKey)
    }
    return false
}

I'm now working on a proposal to (first) allow CAPK to optionally check a generic HTTP endpoint for every VM (using TLS 1.3 with a PSK per VM, replicating the same model used for SSH keys), and to define a new (optional) TLS contract between CAPK and the VM.

We'll (at first) skip the server-side implementation and injection into the VM, and only focus on the CAPK check feature, leaving the server side up to integration teams and end users.

BarthV commented 1 year ago

Hey, I'm back after a (not so) long silence! :)

We spent some time investigating more ways to poll the sentinel file status, and I think we now have a really elegant candidate to propose, with no SSH, no HTTP, and no other remote call to the VM needed to make it work!

If you dive deep enough into KubeVirt you'll see that it provides a form of direct probing into the VM: guest-agent ping & exec probes. It relies on the presence of qemu-guest-agent in the VM (kv guest agent). With this feature, the virt-launcher pod wraps and relays the probe execution up to the VM.

My proposal is now to have the CAPK controller ask the KubeVirt apiserver to execute the exact same kind of check to probe the sentinel file right inside the VM. Everything is handled by the apiserver with nothing else involved. This is really simple & elegant IMO.

You can already validate the feasibility by running the virt-probe CLI directly inside the running virt-launcher pod (and it works):

bash-5.1$ virt-probe --command cat /run/cluster-api/bootstrap-success.complete --domainName kaas_capiovn-cp-2dnzx                 

success

And it also works perfectly through kubectl exec:

> kubectl exec -n kaas virt-launcher-capiovn-cp-2dnzx-hp9zj -- virt-probe --command cat /run/cluster-api/bootstrap-success.complete --domainName kaas_capiovn-cp-2dnzx 

success
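
For context, this is the same mechanism KubeVirt already uses for guest-agent exec probes on a VirtualMachineInstance. The snippet below is only an illustration of that existing API (the VMI name and timings are made up), not the CAPK implementation being proposed here:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: capiovn-cp-example
spec:
  # Exec probes on a VMI are relayed by virt-launcher through
  # qemu-guest-agent (virt-probe) and executed inside the guest.
  readinessProbe:
    exec:
      command: ["cat", "/run/cluster-api/bootstrap-success.complete"]
    initialDelaySeconds: 60
    periodSeconds: 10
  domain:
    devices: {}
    resources:
      requests:
        memory: 1Gi
```

The idea above is simply to let CAPK trigger the same guest-agent execution on demand instead of relying on a declarative probe.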

Do you think this approach is relevant? Let's talk about this at the next community sync-up meeting.

Milestones to be achieved to make this proposal OK:

k8s-triage-robot commented 12 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

bzub commented 7 months ago

Another use case: we are interested in using the Talos bootstrap/control-plane providers with this infrastructure provider. Since Talos does not use SSH, any dependency on SSH would be a hurdle for this idea.

BarthV commented 7 months ago

Maybe it's time to move forward on this topic and finally remove any SSH requirement in CAPK?

bzub commented 7 months ago

For now I am setting checkStrategy: none in my KubevirtMachineTemplates, which has allowed me to continue trying out Talos + the KubeVirt provider.
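
For anyone landing here later, this is roughly where that knob sits in a KubevirtMachineTemplate. The virtualMachineBootstrapCheck field path is from memory, so double-check it against the CRD shipped with your CAPK version:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
metadata:
  name: talos-md-0
  namespace: default
spec:
  template:
    spec:
      virtualMachineBootstrapCheck:
        # "none" skips the SSH sentinel-file probe; the default strategy uses SSH.
        checkStrategy: none
      # virtualMachineTemplate: ... (the rest of the VM spec is unchanged)
```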

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

agradouski commented 5 months ago

/remove-lifecycle rotten

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 3 days ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 3 days ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt/issues/230#issuecomment-2211194934):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.