rossf7 / carbon-aware-karmada-operator

A Kubernetes operator to automate carbon-aware spatial shifting of workloads using Karmada
Apache License 2.0

Document requirements for managed Kubernetes providers to be joinable as member clusters, so customers know what to ask for #5

Open mrchrisadams opened 1 year ago

mrchrisadams commented 1 year ago

Hi Ross, thanks for making this!

Would it be possible to list which APIs a provider of Kubernetes needs to implement or expose for it to be controllable by an existing control plane cluster?

I'll try outlining the use case I'm thinking of.

Running a control plane cluster yourself, and connecting to managed k8s services from multiple suppliers

You're operating a control plane cluster with Supplier A, and rather than spinning up a cluster that you manage yourself with provider B or provider C, you just want to purchase a managed service from them, the same way you might purchase managed object storage.

Different suppliers have different specialisations, or emissions profiles, and you don't want to interface at a low level, provisioning VMs yourself, but would rather use the higher-level APIs afforded by the provider.

This seems to be in line with the goals of Karmada, based on their docs:

Karmada supports:

  • Safe isolation:
    • Create a namespace for each cluster, prefixed with karmada-es-.
  • Multi-mode connection:
    • Push: Karmada is directly connected to the cluster kube-apiserver.
    • Pull: Deploy one agent component in the cluster, Karmada delegates tasks to the agent component.
  • Multi-cloud support (only if compliant with Kubernetes specifications):
    • Support various public cloud vendors.
    • Support for private cloud.
    • Support self-built clusters.

Source: Key Features | karmada

And I'm aware that Karmada has two ways to connect to an existing provider:

  1. Push: Karmada is directly connected to the cluster kube-apiserver.
  2. Pull: Deploy one agent component in the cluster, Karmada delegates tasks to the agent component.

However, I'm not clear what a checking service might look like: something that validates that an existing k8s provider is compatible, such that you could use them to run either a control plane cluster OR a member cluster, just by sending some API calls to a given endpoint.
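
To make that concrete, here's a rough sketch in Go of the kind of probe I have in mind (purely illustrative: the endpoint, the token, and the choice of API groups to check are all made up):

```go
// Sketch of a compatibility probe: given an API endpoint and credentials,
// check whether it behaves like a conformant Kubernetes API server.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

func main() {
	// Placeholder endpoint and token, standing in for whatever the provider
	// hands you. A real probe would verify the CA instead of skipping TLS.
	cfg := &rest.Config{
		Host:            "https://k8s.example-provider.com:6443",
		BearerToken:     "REDACTED",
		TLSClientConfig: rest.TLSClientConfig{Insecure: true},
	}

	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// A real Kubernetes API server answers /version ...
	v, err := dc.ServerVersion()
	if err != nil {
		log.Fatalf("endpoint does not serve the Kubernetes API: %v", err)
	}
	fmt.Printf("Kubernetes %s.%s\n", v.Major, v.Minor)

	// ... and advertises the standard API groups a member cluster would need.
	groups, err := dc.ServerGroups()
	if err != nil {
		log.Fatal(err)
	}
	want := map[string]bool{"apps": false, "rbac.authorization.k8s.io": false}
	for _, g := range groups.Groups {
		if _, ok := want[g.Name]; ok {
			want[g.Name] = true
		}
	}
	for name, found := range want {
		fmt.Printf("API group %s present: %v\n", name, found)
	}
}
```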

If it helps, this notebook explains the idea, but for object storage:

https://nextjournal.com/greenweb/demo-fetching-files-from-aws-compatible-object-storage?change-id=DM16eoHhgQyxDMnCqhXuah

For some further context, I'm interested in this because it would be really cool to add this kind of information to our directories of providers at the Green Web Foundation, so you could easily find API-compatible providers of all kinds of building blocks you might use when creating digital services. The link below might be helpful context too, in this case:

https://www.thegreenwebfoundation.org/directory/services-offered/

rossf7 commented 1 year ago

Hi Chris, thanks, this is an interesting idea.

AIUI the member cluster has to have a full k8s API. It will have the built-in resources like configmaps and deployments, and may have a set of CRDs that extend the k8s API, but these can be bootstrapped and managed by Karmada.

It differs from an AWS API like S3 because if, say, only the deployments API were exposed, then creating related RBAC resources like roles would fail. However, you can restrict which resources a user can manage using RBAC, as in the sketch below.
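
For example, something like this (just a sketch; the namespace, role name, and kubeconfig path are invented) would let whoever is bound to the role manage Deployments in one namespace and nothing else:

```go
package main

import (
	"context"
	"log"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for the member cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/member.kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// A namespaced Role that only covers Deployments; every other resource
	// in the cluster stays out of reach for subjects bound to it.
	role := &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "deployments-only", Namespace: "workloads"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{"apps"},
			Resources: []string{"deployments"},
			Verbs:     []string{"get", "list", "watch", "create", "update", "patch", "delete"},
		}},
	}

	if _, err := cs.RbacV1().Roles("workloads").Create(context.TODO(), role, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("role created; bind it with a RoleBinding to take effect")
}
```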

> However, I'm not clear what a checking service might look like: something that validates that an existing k8s provider is compatible, such that you could use them to run either a control plane cluster OR a member cluster, just by sending some API calls to a given endpoint.

There are automated conformance tests, which most of the providers run and submit to the CNCF for verification: https://www.cncf.io/certification/software-conformance/

To be able to join a member cluster, you need a kubeconfig and network connectivity to that cluster. Managed services usually let you download or create a new kubeconfig.
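
As a rough pre-join check (again just a sketch, the path is a placeholder), you can load that kubeconfig and make one round trip to the API server, which proves both the credentials and the network path:

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the provider let you download.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/downloaded.kubeconfig")
	if err != nil {
		log.Fatalf("kubeconfig unreadable: %v", err)
	}

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// A successful /version round trip confirms the cluster is reachable
	// with these credentials before you attempt to join it.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		log.Fatalf("cannot reach member cluster: %v", err)
	}
	fmt.Printf("member cluster reachable, Kubernetes %s\n", v.GitVersion)
}
```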

> Different suppliers have different specialisations, or emissions profiles, and you don't want to interface at a low level, provisioning VMs yourself, but would rather use the higher-level APIs afforded by the provider.

The managed services vary quite a bit. In some, they provision the nodes for you but you can still access them; in others, you can't access the worker nodes at all. You could even potentially use this with Virtual Kubelet, which uses Functions-as-a-Service instead of running the pods on worker nodes.