karmada-io / karmada

Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
https://karmada.io
Apache License 2.0
4.35k stars 861 forks

Encountered issues while installing scheduler estimator #5207

Closed Schwarao closed 1 month ago

Schwarao commented 1 month ago

What is this error? [screenshots attached]

chaosi-zju commented 1 month ago

This is likely because the secret names used by our different installation methods are inconsistent.

You might have installed Karmada with karmadactl; could you try the karmadactl method to install the scheduler-estimator?

$ karmadactl addons enable karmada-scheduler-estimator --cluster=ali --member-kubeconfig ~/.kube/config --member-context aliyun
Schwarao commented 1 month ago

> This is likely because the secret names used by our different installation methods are inconsistent.
>
> You might have installed Karmada with karmadactl; could you try the karmadactl method to install the scheduler-estimator?
>
> $ karmadactl addons enable karmada-scheduler-estimator --cluster=aliyun --member-kubeconfig ~/.kube/config --member-context aliyun

Yes, I installed Karmada using the VNet Karmada tool, so do I need to delete these four pods first?

chaosi-zju commented 1 month ago

> Yes, I installed Karmada using the VNet Karmada tool, so do I need to delete these four pods first?

Which four pods?

You do need to delete Deployment/karmada-scheduler-estimator-aliyun and Service/karmada-scheduler-estimator-aliyun in the karmada-system namespace.
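A minimal sketch of that cleanup, assuming the estimator resources follow the karmada-scheduler-estimator-&lt;cluster&gt; naming pattern seen in this thread (the real kubectl calls are shown as comments so the sketch can be dry-run anywhere):

```shell
# "aliyun" is the cluster name from this thread; substitute your own.
CLUSTER=aliyun
# Deployment and Service share this name under the script-based install.
NAME="karmada-scheduler-estimator-${CLUSTER}"

# Run these against the karmada host cluster (commented out for a dry run):
#   kubectl delete deployment "${NAME}" -n karmada-system
#   kubectl delete service    "${NAME}" -n karmada-system
echo "${NAME}"
```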

chaosi-zju commented 1 month ago

@XiShanYongYe-Chang What do you think about this problem? Do we need to regard it as a bug and fix it?

Supposing we installed Karmada by karmadactl or Helm, we can't use hack/deploy-scheduler-estimator.sh to install the estimator, because some secret names are inconsistent between the different installation methods.

The same problem may exist between other installation methods.

Schwarao commented 1 month ago

> What do you think about this problem? Do we need to regard it as a bug and fix it?
>
> Supposing we installed Karmada by karmadactl or Helm, we can't use hack/deploy-scheduler-estimator.sh to install the estimator, because some secret names are inconsistent between the different installation methods.
>
> The same problem may exist between other installation methods.

Yes, this is a problem, and it seems that the installation guide does not provide links to the other installation methods.

XiShanYongYe-Chang commented 1 month ago

Thanks~

Firstly, I don't think this is a problem, but it would be best if we could standardize the naming across different installation methods.

Schwarao commented 1 month ago

@chaosi-zju Can you tell me how to solve this? [screenshot]

This is the secret: [screenshot]

And this is the deployment for this pod: [screenshot]

chaosi-zju commented 1 month ago

Can you provide the command you used to install the estimator, and its output?

Schwarao commented 1 month ago

> Can you provide the command you used to install the estimator, and its output?

[screenshot of the command and its output]

chaosi-zju commented 1 month ago

Why did you use karmada-apiserver.config as the --member-kubeconfig parameter?

attention:

-C, --cluster='':
        Name of the member cluster that enables or disables the scheduler estimator.

 --member-context='':
        Member cluster's context which to deploy scheduler estimator

--member-kubeconfig='':
    Member cluster's kubeconfig which to deploy scheduler estimator

The -C parameter should be the Cluster object name, while --member-kubeconfig should be the member cluster's kubeconfig.
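To make the flag mapping concrete, here is a hedged sketch assuming a member cluster registered as "ali" whose kubeconfig context is "aliyun" (the values in this thread); the commands for discovering each value are shown as comments:

```shell
# -C / --cluster       -> name of the Cluster object in the karmada control plane
#                         (list them with: kubectl get clusters --kubeconfig karmada-apiserver.config)
# --member-kubeconfig  -> the MEMBER cluster's kubeconfig, NOT karmada-apiserver.config
# --member-context     -> a context inside that member kubeconfig
#                         (list them with: kubectl config get-contexts --kubeconfig ~/.kube/config)
CLUSTER=ali
MEMBER_KUBECONFIG="$HOME/.kube/config"
MEMBER_CONTEXT=aliyun

# Compose and print the command so it can be checked before running:
echo "karmadactl addons enable karmada-scheduler-estimator" \
     "--cluster ${CLUSTER}" \
     "--member-kubeconfig ${MEMBER_KUBECONFIG}" \
     "--member-context ${MEMBER_CONTEXT}"
```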

Schwarao commented 1 month ago

> Why did you use karmada-apiserver.config as the --member-kubeconfig parameter?
>
> attention:
>
> -C, --cluster='':
>         Name of the member cluster that enables or disables the scheduler estimator.
>
> --member-context='':
>         Member cluster's context which to deploy scheduler estimator
>
> --member-kubeconfig='':
>     Member cluster's kubeconfig which to deploy scheduler estimator
>
> The -C parameter should be the Cluster object name, while --member-kubeconfig should be the member cluster's kubeconfig.

I changed the command to the following: karmadactl addons enable karmada-search karmada-scheduler-estimator -C aliyun --member-kubeconfig ~/.kube/config --context aliyun. But it's still the same error as above.

chaosi-zju commented 1 month ago

Try: kubectl get clusters and provide me with the output.

Schwarao commented 1 month ago

> Try: kubectl get clusters and provide me with the output.

[screenshots attached]

chaosi-zju commented 1 month ago

> [screenshot]
>
> I changed the command to the following: karmadactl addons enable karmada-search karmada-scheduler-estimator -C aliyun --member-kubeconfig ~/.kube/config --context aliyun [screenshot]

You tried three times above, and each time the command you typed was different. It looks like the parameters were wrong each time and didn't follow the usage described above.

I suggest you cleanly remove the old Deployment/karmada-scheduler-estimator-aliyun created in the control-plane cluster (kubectl delete deploy karmada-scheduler-estimator-aliyun -n karmada-system), and then re-run the following command:

karmadactl addons enable karmada-scheduler-estimator --cluster ali --member-kubeconfig ~/.kube/config --member-context aliyun

Sorry, in principle we don't communicate in Chinese, but I was worried my description would otherwise be unclear and make communication inefficient.
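The two-step fix above (remove the old deployment, then re-enable via karmadactl) can be sketched as a dry run; the cluster name "ali" and context "aliyun" come from this thread, and the real calls are only printed so the sketch can be checked before execution:

```shell
# Old estimator left behind by the previous install method:
OLD=karmada-scheduler-estimator-aliyun
# Cluster object name and member-cluster context from this thread:
CLUSTER=ali
CONTEXT=aliyun

STEP1="kubectl delete deploy ${OLD} -n karmada-system"
STEP2="karmadactl addons enable karmada-scheduler-estimator --cluster ${CLUSTER} --member-kubeconfig ${HOME}/.kube/config --member-context ${CONTEXT}"

# Print the two commands in order; run them against the karmada host cluster.
printf '%s\n' "${STEP1}" "${STEP2}"
```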

Schwarao commented 1 month ago

> [screenshot]
>
> I changed the command to the following: karmadactl addons enable karmada-search karmada-scheduler-estimator -C aliyun --member-kubeconfig ~/.kube/config --context aliyun [screenshot]
>
> You tried three times above, and each time the command you typed was different. It looks like the parameters were wrong each time and didn't follow the usage described above.
>
>   • From the screenshot, your cluster is named ali, so you should use -C ali or --cluster ali
>   • Your member cluster's kubeconfig seems to be ~/.kube/config, so use --member-kubeconfig ~/.kube/config
>   • From the earlier conversation, your member cluster's context seems to be aliyun, so use --member-context aliyun
>
> I suggest you cleanly remove the old Deployment/karmada-scheduler-estimator-aliyun created in the control-plane cluster, and then re-run the following command:
>
> karmadactl addons enable karmada-scheduler-estimator --cluster ali --member-kubeconfig ~/.kube/config --member-context aliyun
>
> Sorry, in principle we don't communicate in Chinese, but I was worried my description would otherwise be unclear and make communication inefficient.

It's OK now. [screenshot]

chaosi-zju commented 1 month ago

Congratulations~ 👍

XiShanYongYe-Chang commented 1 month ago

It seems that the issue has been answered, let's close it first. /close

karmada-bot commented 1 month ago

@XiShanYongYe-Chang: Closing this issue.

In response to [this](https://github.com/karmada-io/karmada/issues/5207#issuecomment-2244726720):

> It seems that the issue has been answered, let's close it first.
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.