karmada-io / karmada

Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
https://karmada.io
Apache License 2.0

failover feature-gate cannot be disabled correctly #5375

Open · kubepopeye opened this issue 1 month ago

kubepopeye commented 1 month ago

Please provide an in-depth description of the question you have:

I don't want Karmada to trigger a failover when a cluster is unreachable, so I tried to disable the Failover feature gate directly on karmada-controller-manager, but I found that failover still occurs!

What do you think about this question?: I looked at the Karmada implementation. The cluster-controller does check the Failover feature gate in the monitoring path, but the taintClusterByCondition method lacks that check, so the taint is still applied and failover behaviour is ultimately triggered even though the feature gate is disabled.
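
To make the expected behaviour concrete, here is a minimal, self-contained sketch of the kind of feature-gate guard I would expect around the condition-based tainting. It is not the actual Karmada source; only the gate name "Failover" matches Karmada's feature gate, and the types and helpers below are made up to show the pattern:

```go
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

// Failover mirrors the name of Karmada's feature gate; everything else in
// this file is illustrative scaffolding only.
const Failover featuregate.Feature = "Failover"

var gates featuregate.MutableFeatureGate = featuregate.NewFeatureGate()

func init() {
	// Registered as enabled by default here for illustration; operators would
	// turn it off with --feature-gates=Failover=false on the controller manager.
	_ = gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		Failover: {Default: true, PreRelease: featuregate.Beta},
	})
}

// taintClusterByCondition is a stand-in for the real method; the point is the
// guard at the top, which I believe is missing today.
func taintClusterByCondition(clusterName string, clusterReady bool) {
	if !gates.Enabled(Failover) {
		fmt.Printf("Failover disabled, skipping taint on cluster %s\n", clusterName)
		return
	}
	if !clusterReady {
		fmt.Printf("adding NoSchedule taint to not-ready cluster %s\n", clusterName)
	}
}

func main() {
	_ = gates.Set("Failover=false")
	taintClusterByCondition("member1", false)
}
```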

Environment:

whitewindmills commented 1 month ago

taintClusterByCondition only adds NoSchedule taints, which only affect scheduling.

kubepopeye commented 1 month ago
(screenshot attached)
kubepopeye commented 1 month ago

So what is causing this problem, and can you help explain? It's true that the taint is NoSchedule, but it still ends up triggering the removal of the orphaned works.

@whitewindmills

whitewindmills commented 1 month ago

The orphan works may be caused by multiple reasons; I cannot find the root cause from these comments. Can you paste the scheduler logs here?

kubepopeye commented 1 month ago

Scheduler logs? I found that the cluster seems to be removed from the ResourceBinding spec only when the taint is present, which ends up letting findOrphan pick those works up.

whitewindmills commented 1 month ago

You have disabled the failover feature, but karmada-scheduler might still change its scheduling result.

kubepopeye commented 1 month ago

> You have disabled the failover feature, but karmada-scheduler might still change its scheduling result.

Is this the expected behaviour? I've found that in some cases it can lead to an empty apiEnablements list in the Cluster status, which ends up with disastrous consequences!
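
To show why an empty apiEnablements list is so dangerous, here is a minimal sketch (hypothetical names, not Karmada's actual scheduler code) of a filter step that only keeps clusters advertising the required API; a cluster whose reported API list was cleared while it was offline silently drops out of the scheduling result:

```go
package main

import "fmt"

// Cluster is a simplified stand-in for the Cluster object's status.
type Cluster struct {
	Name           string
	APIEnablements []string // e.g. "apps/v1/Deployment"
}

// filterByAPIEnablement keeps only clusters that report the API the workload needs.
func filterByAPIEnablement(clusters []Cluster, wantAPI string) []Cluster {
	var fit []Cluster
	for _, c := range clusters {
		for _, api := range c.APIEnablements {
			if api == wantAPI {
				fit = append(fit, c)
				break
			}
		}
	}
	return fit
}

func main() {
	clusters := []Cluster{
		{Name: "member1", APIEnablements: nil}, // status wiped while the cluster was offline
		{Name: "member2", APIEnablements: []string{"apps/v1/Deployment"}},
	}
	// member1 is dropped from the result even though failover is disabled.
	for _, c := range filterByAPIEnablement(clusters, "apps/v1/Deployment") {
		fmt.Println("still schedulable:", c.Name)
	}
}
```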

whitewindmills commented 1 month ago

No, but we're fixing it. Ref: https://github.com/karmada-io/karmada/pull/5325 and https://github.com/karmada-io/karmada/pull/5216

whitewindmills commented 1 month ago

> I've found that in some cases it can lead to an empty apiEnablements list in the Cluster status, which ends up with disastrous consequences!

Have you confirmed that's the root cause?

kubepopeye commented 1 month ago

> I've found that in some cases it can lead to an empty apiEnablements list in the Cluster status, which ends up with disastrous consequences!
>
> Have you confirmed that's the root cause?

Yes. In my view, turning failover off means that no migration across availability zones should take place, but it seems there are some features here that still cause failover-like behaviour.

kubepopeye commented 1 month ago

> I've found that in some cases it can lead to an empty apiEnablements list in the Cluster status, which ends up with disastrous consequences!
>
> Have you confirmed that's the root cause?

If we are unlucky, the cluster-status-controller will clear the apiEnablements in the cluster status when the cluster goes offline. The scheduler then steps in, finds no matching APIs, and clears the cluster from the ResourceBinding spec; finally the binding controller's removeOrphan logic deletes our downstream resources. This is the complete chain, so we still consider the failover implementation to be incomplete.
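
To illustrate the final step of that chain, here is a minimal sketch (the names are hypothetical, not Karmada's actual helpers) of how works targeting a cluster that has disappeared from the binding's scheduled clusters are treated as orphans and removed:

```go
package main

import "fmt"

// Work is a simplified stand-in for a Work object propagated to a member cluster.
type Work struct {
	Name    string
	Cluster string // the member cluster this work targets
}

// findOrphanWorks returns works whose target cluster is no longer in the
// binding's scheduled cluster list.
func findOrphanWorks(works []Work, scheduledClusters []string) []Work {
	scheduled := make(map[string]bool, len(scheduledClusters))
	for _, c := range scheduledClusters {
		scheduled[c] = true
	}
	var orphans []Work
	for _, w := range works {
		if !scheduled[w.Cluster] {
			orphans = append(orphans, w)
		}
	}
	return orphans
}

func main() {
	works := []Work{{Name: "nginx", Cluster: "member1"}, {Name: "nginx", Cluster: "member2"}}
	// If the scheduler clears member1 from the binding (e.g. because its
	// apiEnablements looked empty), its work becomes an orphan; deleting it
	// removes the propagated resource from that member cluster.
	for _, w := range findOrphanWorks(works, []string{"member2"}) {
		fmt.Printf("removing orphan work %q targeting cluster %s\n", w.Name, w.Cluster)
	}
}
```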

whitewindmills commented 1 month ago

First, this has nothing to do with failover. Did you actually see your cluster failing? That's important; wrong apiEnablements are usually caused by network errors or a failing APIService.

Todo: grep for such logs in your karmada-controller-manager.

kubepopeye commented 1 month ago
> • Failed to get any APIs installed in Cluster

Yes, "Failed to get any APIs installed in Cluster".

(screenshot attached)