easzlab / kubeasz

Install a Kubernetes cluster with Ansible playbooks; explains how the components interact; straightforward to use and unaffected by network restrictions in mainland China
https://github.com/easzlab/kubeasz

Why does node deletion always run on localhost? It should run against the target cluster #1345

Closed 13567436138 closed 9 months ago

13567436138 commented 10 months ago

What happened?

TASK [run kubectl drain @] ****************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/etc/kubeasz/bin/kubectl drain  --delete-emptydir-data --ignore-daemonsets --force", "delta": "0:00:00.068968", "end": "2024-01-15 15:41:33.650722", "msg": "non-zero return code", "rc": 1, "start": "2024-01-15 15:41:33.581754", "stderr": "error: USAGE: drain NODE [flags]\nSee 'kubectl drain -h' for help and examples", "stderr_lines": ["error: USAGE: drain NODE [flags]", "See 'kubectl drain -h' for help and examples"], "stdout": "", "stdout_lines": []}

PLAY RECAP ********************************************************************************************
localhost                  : ok=6    changed=2    unreachable=0    failed=1    skipped=2    rescued=0    ignored=0   
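
Looking at the failed command, `kubectl drain` was invoked with an empty node name (note the double space after `drain`), which is what the playbook below produces when its node lookup returns nothing. A quick check on the deploy node, with 192.168.1.5 as a placeholder for the IP passed as NODE_TO_DEL:

```console
# If the target IP does not appear (surrounded by spaces) in the node list,
# the playbook's grep yields an empty string and drain runs without a node argument.
$ /etc/kubeasz/bin/kubectl get node -owide | grep ' 192.168.1.5 '
```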

root@karmada-01-a:/etc/kubeasz/playbooks# vi 32.delnode.yml 
root@karmada-01-a:/etc/kubeasz/playbooks# cat 32.delnode.yml
# WARNING: this playbook will clean the node {{ NODE_TO_DEL }}

- hosts: localhost 
  tasks:
  - fail: msg="you CAN NOT delete the last member of kube_master!"
    when: "groups['kube_master']|length < 2 and NODE_TO_DEL in groups['kube_master']"

  - name: register the K8S major.minor version
    shell: echo {{ K8S_VER }}|awk -F. '{print $1"."$2}'
    register: K8S_VER_MAIN

  - name: set kubectl drain options
    set_fact: DRAIN_OPT="--delete-emptydir-data --ignore-daemonsets --force"
    when: "K8S_VER_MAIN.stdout|float > 1.19"

  - name: set kubectl drain options
    set_fact: DRAIN_OPT="--delete-local-data --ignore-daemonsets --force"
    when: "K8S_VER_MAIN.stdout|float < 1.20"

  - name: debug info
    debug: var="DRAIN_OPT"

  - name: get the node name to delete
    shell: "{{ base_dir }}/bin/kubectl get node -owide|grep ' {{ NODE_TO_DEL }} '|awk '{print $1}'"
    register: NODE_NAME

  - debug: var="NODE_NAME"

  - name: run kubectl drain @{{ NODE_NAME.stdout }}
    shell: "{{ base_dir }}/bin/kubectl drain {{ NODE_NAME.stdout }} {{ DRAIN_OPT }}"
    #ignore_errors: true

  - name: clean node {{ NODE_TO_DEL }}
    shell: "cd {{ base_dir }} && ansible-playbook -i clusters/{{ CLUSTER }}/hosts \
              roles/clean/clean_node.yml \
              -e NODE_TO_CLEAN={{ NODE_TO_DEL }} \
              -e DEL_NODE=yes \
              -e DEL_LB=yes >> /tmp/ansible-`date +'%Y%m%d%H%M%S'`.log 2>&1 \
            || echo 'data not cleaned on {{ NODE_TO_DEL }}'"
    register: CLEAN_STATUS

  - debug: var="CLEAN_STATUS"

  - name: run kubectl delete node {{ NODE_NAME.stdout }}
    shell: "{{ base_dir }}/bin/kubectl delete node {{ NODE_NAME.stdout }}"
    ignore_errors: true

  # lineinfile is inadequate to delete lines between some specific line range
  - name: remove the node's entry in hosts
    shell: 'sed -i "/^\[kube_node/,/^\[harbor/ {/^{{ NODE_TO_DEL }}$/d}" {{ base_dir }}/clusters/{{ CLUSTER }}/hosts'

  # lineinfile is inadequate to delete lines between some specific line range
  - name: remove the node's entry in hosts
    shell: 'sed -i "/^\[kube_node/,/^\[harbor/ {/^{{ NODE_TO_DEL }} /d}" {{ base_dir }}/clusters/{{ CLUSTER }}/hosts'
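
The failing step is the `run kubectl drain @{{ NODE_NAME.stdout }}` task: when the lookup above yields an empty `NODE_NAME.stdout`, drain is called without a node argument. A minimal guard, as a sketch only (not the project's current code), placed right before that task would make the failure explicit:

```yaml
# Sketch only: abort with a clear message instead of calling
# `kubectl drain` with an empty node name.
- name: fail if the node to delete is not found in the cluster
  fail:
    msg: "node {{ NODE_TO_DEL }} not found in 'kubectl get node -owide' output"
  when: NODE_NAME.stdout | length == 0
```

This would not by itself address the point in the title (that cleanup should run against the target cluster rather than localhost), but it turns the silent empty lookup into an actionable error.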

What did you expect to happen?

The node is deleted successfully.

How can we reproduce it (as minimally and precisely as possible)?

Run the node-deletion command via docker; it fails as shown above.
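
For reference, node removal in kubeasz is normally driven through ezctl on the deploy node, which wraps the 32.delnode.yml playbook. A minimal sketch, assuming the standard install path and using a placeholder cluster name and node IP (not taken from this report):

```console
# Placeholders: k8s-01 is the cluster name, 192.168.1.5 the node to remove.
$ cd /etc/kubeasz
$ ./ezctl del-node k8s-01 192.168.1.5
```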

Anything else we need to know?

No response

Kubernetes version

1.28.1

Kubeasz version

111

OS version

```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
```

11

Related plugins (CNI, CSI, ...) and versions (if applicable)

github-actions[bot] commented 9 months ago

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] commented 9 months ago

This issue was closed because it has been inactive for 14 days since being marked as stale.