Open kubepopeye opened 1 month ago
taintClusterByCondition only adds NoSchedule taints, which only affect scheduling.
So what's causing this problem, and can you help answer? It's true that it's NoSchedule, but it's still triggering the removal of the orphaned works.
@whitewindmills
The orphaned works may be caused by multiple reasons; I cannot find the root cause from those comments. Can you paste the scheduler logs here?
Scheduler logs? I found that it removes the cluster from the ResourceBinding spec only in the taint case, which is what ends up letting findOrphan find those works.
You have disabled the failover feature, but karmada-scheduler might still change its scheduling result.
Is this the expected behaviour? I've found that in some cases it can lead to an empty list in the Cluster's APIEnablements, which ends up with disastrous consequences!
no, but we're fixing it. ref: https://github.com/karmada-io/karmada/pull/5325 https://github.com/karmada-io/karmada/pull/5216
I've found that in some cases it can lead to an empty list of Cluster's APIEnablements, which ends up with disastrous consequences!
have u ensured that's your root cause?
Yes. As I see it, disabling failover should mean no migration across availability zones takes place, but some features here seem to cause failover-like behaviour nonetheless.
If we are unlucky, the cluster-status-controller clears the apiEnablements in the cluster status when the cluster goes offline. The scheduler then steps in, finds no matching APIs, and clears the cluster from the ResourceBinding spec. Finally, the binding controller's removeOrphan deletes our downstream resources. This is the complete chain, so we still consider the failover implementation incomplete.
First, this has nothing to do with failover. Did you actually see your cluster failing? That's important: wrong apiEnablements are usually caused by network errors or a failing APIService.
To confirm, grep for these logs in your karmada-controller-manager:
- Failed to get any APIs installed in Cluster
- Maybe get partial
If you find them, that's the case.
- Failed to get any APIs installed in Cluster
Yes: Failed to get any APIs installed in Cluster
Please provide an in-depth description of the question you have:
I don't want Karmada to trigger a failover when a cluster is unreachable. I tried disabling the feature gate directly in karmada-controller and found that the failover still occurs!
What do you think about this question?: I looked at the Karmada implementation. The cluster-controller does check the failover feature gate in the monitor path, but the taintClusterByCondition method lacks this check, so the taint is still applied, which ultimately produces failover-like behaviour despite the disabled feature gate.
Environment: