
sync crd from member to karmada control plane #1470

Closed. merryzhou closed this PR 1 year ago.

merryzhou commented 2 years ago

https://github.com/karmada-io/karmada/issues/419

Syncing CRDs from member clusters to the Karmada control plane at the joining phase.

karmada-bot commented 2 years ago

Welcome @merryzhou! It looks like this is your first PR to karmada-io/karmada 🎉

RainbowMango commented 2 years ago

It seems this approach syncs CRDs from member clusters to the Karmada control plane at the joining phase?

merryzhou commented 2 years ago

> It seems this approach syncs CRDs from member clusters to the Karmada control plane at the joining phase?

yes

RainbowMango commented 2 years ago

Before looking into the code, can you describe the user story (what are the benefits to users)?

merryzhou commented 2 years ago

> Before looking into the code, can you describe the user story (what are the benefits to users)?

Suppose there is a member cluster with Volcano installed, so there are a lot of related CRDs. This member cluster was joined to the Karmada control plane with the karmadactl join ... command. Now I want to create a vcjob on the Karmada control plane, but unfortunately I get an error because the CRDs are missing; I have to create all of these CRDs on the control plane manually.

RainbowMango commented 2 years ago

Yeah, that makes sense.

Should we make this configurable? Something like introducing a flag to specify whether the user wants to sync CRDs during the join process.

How do we deal with clusters in Pull mode? These clusters are joined by karmada-agent.

How do we deal with CRD differences between member clusters? E.g. cluster1 installed CRD v1alpha1 and cluster2 installed CRD v1beta1; how do we merge them?

prodanlabs commented 2 years ago

> Should we make this configurable? Something like introducing a flag to specify whether the user wants to sync CRDs during the join process.
>
> How do we deal with clusters in Pull mode? These clusters are joined by karmada-agent.
>
> How do we deal with CRD differences between member clusters? E.g. cluster1 installed CRD v1alpha1 and cluster2 installed CRD v1beta1; how do we merge them?

It's a cool feature that our team has discussed.

Our thoughts at the time were: synchronize the CRDs of the member clusters to Karmada through an agent, and create the corresponding ClusterPropagationPolicy.

For the same version, the CRD can be synchronized from one member cluster, and the other member clusters using this version of the CRD are added to the ClusterPropagationPolicy.

For different versions, such as member1 using v1beta1 and member2 using v1, both v1beta1 and v1 are synchronized to Karmada, and the CRDs coexist.

In this way, Karmada takes priority and is more convenient to manage.

But in the end we upgraded the CRDs uniformly because of business needs, so this problem went away for us.
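As a rough sketch of this proposal, a controller could stamp out one ClusterPropagationPolicy per synced CRD, built on Karmada's policy v1alpha1 types. The helper name, CRD name, and cluster names below are illustrative, not part of this PR:

```go
package crdsync

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	policyv1alpha1 "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"
)

// policyForCRD builds a ClusterPropagationPolicy that propagates one synced CRD
// back to the member clusters already serving that version of it.
func policyForCRD(crdName string, clusters []string) *policyv1alpha1.ClusterPropagationPolicy {
	return &policyv1alpha1.ClusterPropagationPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: crdName},
		Spec: policyv1alpha1.PropagationSpec{
			ResourceSelectors: []policyv1alpha1.ResourceSelector{{
				APIVersion: "apiextensions.k8s.io/v1",
				Kind:       "CustomResourceDefinition",
				Name:       crdName,
			}},
			Placement: policyv1alpha1.Placement{
				ClusterAffinity: &policyv1alpha1.ClusterAffinity{
					// e.g. []string{"member1", "member2"}: the clusters on this CRD version.
					ClusterNames: clusters,
				},
			},
		},
	}
}
```

Coexisting versions would then map naturally onto separate policies, one per CRD version group.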

RainbowMango commented 2 years ago

@prodanlabs @merryzhou We have the Community Meeting this afternoon; can you present it so we can have a quick talk there? If yes, please add an agenda item. (Never mind if you can't present; we can still discuss here.)

prodanlabs commented 2 years ago

Hi @merryzhou, would you like to add the agenda item?

merryzhou commented 2 years ago

OK, I'd like to discuss it at the meeting, but I don't have permission to add an agenda item.

RainbowMango commented 2 years ago

> OK, I'd like to discuss it at the meeting, but I don't have permission to add an agenda item.

You do. By joining the Google group you will be able to edit the meeting notes. Join the mailing list here: https://groups.google.com/forum/#!forum/karmada

I can help to add the agenda this time.

prodanlabs commented 2 years ago

Sorry, something went wrong and I had to leave the meeting. My question is: which is better for syncing CRDs, join or promote?

prodanlabs commented 2 years ago

Synchronizing CRDs in join is more convenient for users.

Hi @lonelyCZ, is it also possible in promote?

lonelyCZ commented 2 years ago

> Hi @lonelyCZ, is it also possible in promote?

I was just thinking about it, but I don't think promote is the right fit:

1. promote can add a CRD from a member cluster to the Karmada control plane, but it completely takes the CRD over; if we delete it from the control plane, the CRD in the member cluster can also be deleted.
2. Executing promote one by one is too complicated.
3. promote is more suitable for workload resources.

prodanlabs commented 2 years ago

> > Hi @lonelyCZ, is it also possible in promote?
>
> I was just thinking about it, but I don't think promote is the right fit:
>
> 1. promote can add a CRD from a member cluster to the Karmada control plane, but it completely takes the CRD over; if we delete it from the control plane, the CRD in the member cluster can also be deleted.
> 2. Executing promote one by one is too complicated.
> 3. promote is more suitable for workload resources.

OK, doing it in join would be better than promote. Thanks.

lonelyCZ commented 2 years ago

On the other hand, if we only want to sync a single CRD, we can use the generic approach:

```
[root@master67 lonelyCZ]# karmadactl get crd workloads.workload.example.io -C member1 -o yaml | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/workloads.workload.example.io created
```

RainbowMango commented 2 years ago

Kindly ping @merryzhou, what's the progress now?

merryzhou commented 2 years ago

> Kindly ping @merryzhou, what's the progress now?

Sorry for the late reply; the PR is ready now.

RainbowMango commented 2 years ago

cc @lonelyCZ Could you please help with this?

lonelyCZ commented 2 years ago

> cc @lonelyCZ Could you please help with this?

Ok, I will review it.

lonelyCZ commented 2 years ago

/assign @lonelyCZ

lonelyCZ commented 2 years ago

I just tested it and it worked fine. But it reports too many warnings; perhaps we could suppress these warnings depending on the specific apiserver version of the member cluster.

```
[root@master67 karmada]# ./karmadactl join member2 --cluster-kubeconfig=/root/.kube/config --cluster-context=karmada-host --sync-crd
W0428 12:27:49.966041  917639 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0428 12:27:50.061981  917639 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0428 12:27:50.086165  917639 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0428 12:27:50.124662  917639 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0428 12:27:50.158733  917639 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0428 12:27:50.187025  917639 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0428 12:27:50.201521  917639 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0428 12:27:50.246636  917639 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0428 12:27:50.263918  917639 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0428 12:27:50.425419  917639 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Total customResourceDefinitions count: 9
Sync 9 customResourceDefinition succeed: certificaterequests.cert-manager.io, certificates.cert-manager.io, etcdbackups.etcd.phil-sun.io, etcdclusters.etcd.phil-sun.io, orders.acme.cert-manager.io, challenges.acme.cert-manager.io, clusterissuers.cert-manager.io, issuers.cert-manager.io, workloads.workload.example.io
cluster(member2) is joined successfully
```
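These warnings come from listing CRDs through the deprecated apiextensions.k8s.io/v1beta1 API on clusters that already serve v1. A minimal sketch of the version-aware listing suggested here, assuming an apiextensions clientset and client-go's discovery API (the helper name is hypothetical, not the PR's actual code):

```go
package crdsync

import (
	"context"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// listCRDNames prefers apiextensions.k8s.io/v1 and falls back to the deprecated
// v1beta1 API only when the member apiserver (pre v1.16) does not serve v1,
// which avoids the deprecation warnings on newer clusters.
func listCRDNames(ctx context.Context, client apiextensionsclient.Interface) ([]string, error) {
	var names []string
	// Ask the member apiserver whether it serves apiextensions.k8s.io/v1 at all.
	if _, err := client.Discovery().ServerResourcesForGroupVersion("apiextensions.k8s.io/v1"); err == nil {
		crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
		if err != nil {
			return nil, err
		}
		for i := range crds.Items {
			names = append(names, crds.Items[i].Name)
		}
		return names, nil
	}
	// Older cluster: fall back to the v1beta1 API.
	crds, err := client.ApiextensionsV1beta1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	for i := range crds.Items {
		names = append(names, crds.Items[i].Name)
	}
	return names, nil
}
```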

And we could add an example of using --sync-crd to karmadactl join -h:

```
[root@master67 karmada]# ./karmadactl join -h
Join registers a cluster to control plane.

Usage:
  karmadactl join CLUSTER_NAME --cluster-kubeconfig=<KUBECONFIG> [flags]

Examples:

# Join cluster into karamada control plane
karmadactl join CLUSTER_NAME --cluster-kubeconfig=<KUBECONFIG>
```
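In the command definition that could look something like the sketch below; the constant name and the example wording are hypothetical, assuming only the --sync-crd flag shown above:

```go
package cmd

// Sketch: extend karmadactl join's examples to cover the new --sync-crd flag.
const joinExample = `
# Join cluster into karmada control plane
karmadactl join CLUSTER_NAME --cluster-kubeconfig=<KUBECONFIG>

# Join cluster and sync its CustomResourceDefinitions to the control plane
karmadactl join CLUSTER_NAME --cluster-kubeconfig=<KUBECONFIG> --sync-crd`
```
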
merryzhou commented 2 years ago

@lonelyCZ

I modified the implementation to minimize the warnings, and also added an example.

PTAL, thanks!

lonelyCZ commented 2 years ago

Great, it tests fine on a v1.19.1 member cluster, but I haven't tested it on member clusters below v1.16.

Have you tested it on a member cluster below v1.16? @merryzhou

```
[root@master67 karmada]# ./karmadactl join member2 --cluster-kubeconfig=/root/.kube/config --cluster-context=karmada-host --sync-crd
Total customResourceDefinitions count: 9
Sync 6 customResourceDefinition succeed: certificaterequests.cert-manager.io, certificates.cert-manager.io, challenges.acme.cert-manager.io, clusterissuers.cert-manager.io, issuers.cert-manager.io, orders.acme.cert-manager.io
Skip 3 customResourceDefinitions: etcdbackups.etcd.phil-sun.io,etcdclusters.etcd.phil-sun.io,workloads.workload.example.io
cluster(member2) is joined successfully
```

karmada-bot commented 1 year ago

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.

lonelyCZ commented 1 year ago

Hi @merryzhou, why did you close this PR? I think it is useful for this specific scenario.