rancher / rke

Rancher Kubernetes Engine (RKE) is an extremely simple, lightning-fast Kubernetes distribution that runs entirely within containers.
Apache License 2.0

Set kubelet extra_args between different hosts #3207

Closed: shellsuperhu closed this issue 10 months ago

shellsuperhu commented 1 year ago

RKE version: rke version v1.3.14

Docker version: (docker version, docker info preferred)

Client: Docker Engine - Community
  Version:        20.10.7
  API version:    1.41
  Go version:     go1.13.15
  Git commit:     f0df350
  Built:          Wed Jun 2 11:58:10 2021
  OS/Arch:        linux/amd64
  Context:        default
  Experimental:   true

Server: Docker Engine - Community
  Engine:
    Version:      20.10.7
    API version:  1.41 (minimum version 1.12)
    Go version:   go1.13.15
    Git commit:   b0f5bc3
    Built:        Wed Jun 2 11:56:35 2021
    OS/Arch:      linux/amd64
    Experimental: false
  containerd:
    Version:      1.4.6
    GitCommit:    d71fcd7d8303cbf684402823e425e9dd2e99285d
  runc:
    Version:      1.0.0-rc95
    GitCommit:    b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
  docker-init:
    Version:      0.19.0
    GitCommit:    de40ad0

Operating system and kernel: (cat /etc/os-release, uname -r preferred)

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO) ESXi

cluster.yml file:

nodes:
  - address: 192.168.100.41
    internal_address: 192.168.100.41
    user: admin
    role: [controlplane, etcd]
  - address: 192.168.100.42
    internal_address: 192.168.100.42
    user: admin
    role: [controlplane, worker, etcd]
  - address: 192.168.100.43
    internal_address: 192.168.100.43
    user: admin
    role: [controlplane, etcd]
  - address: 192.168.100.48
    internal_address: 192.168.100.48
    user: admin
    role: [worker]

cluster_name: kube-test

kubernetes_version: v1.24.4-rancher1-1

services:
  etcd:
    backup_config:
      interval_hours: 6
      retention: 30
  kube-api:
    service_cluster_ip_range: 10.41.0.0/16
    extra_args:
      feature-gates: MaxUnavailableStatefulSet=true  
  kube-controller:
    cluster_cidr: 10.40.0.0/16
    service_cluster_ip_range: 10.41.0.0/16
    extra_args:
      feature-gates: MaxUnavailableStatefulSet=true
  kubelet:
    cluster_dns_server: 10.41.0.10
    extra_args:
      enforce-node-allocatable: "pods,kube-reserved,system-reserved"
      kube-reserved: "cpu=1,memory=1Gi,ephemeral-storage=1Gi"
      system-reserved: "cpu=500m,memory=1Gi,ephemeral-storage=1Gi"
      kube-reserved-cgroup: /kube.slice
      system-reserved-cgroup: /system.slice
      eviction-hard: "memory.available<500Mi,imagefs.available<10%,nodefs.available<10%,nodefs.inodesFree<5%"
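For context, the kubelet derives Node Allocatable by subtracting kube-reserved, system-reserved, and the hard eviction threshold from node capacity. A minimal sketch using the memory values from the cluster.yml above (the 16 GiB node capacity is a hypothetical example, not from the issue):

```python
# Sketch: how the kubelet computes Node Allocatable memory
# (allocatable = capacity - kube-reserved - system-reserved - eviction-hard).
# Reservation values match the cluster.yml above; node capacity is hypothetical.

GI = 1024 ** 3
MI = 1024 ** 2

def allocatable_memory(capacity, kube_reserved, system_reserved, eviction_hard):
    """Return the memory (bytes) left over for pod scheduling."""
    return capacity - kube_reserved - system_reserved - eviction_hard

capacity = 16 * GI        # hypothetical 16 GiB worker node
kube_reserved = 1 * GI    # kube-reserved: memory=1Gi
system_reserved = 1 * GI  # system-reserved: memory=1Gi
eviction_hard = 500 * MI  # eviction-hard: memory.available<500Mi

alloc = allocatable_memory(capacity, kube_reserved, system_reserved, eviction_hard)
print(f"allocatable: {alloc / GI:.2f} Gi")  # 13.51 Gi
```

Because cluster.yml applies these extra_args to every node, the same 2.5 GiB is withheld on small and large hosts alike, which is exactly the limitation this issue raises.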

Steps to Reproduce: How can I set kubelet extra_args differently for different hosts? For example:

192.168.100.41:
  extra_args:
    kube-reserved: "cpu=1,memory=1Gi,ephemeral-storage=1Gi"

192.168.100.42:
  extra_args:
    kube-reserved: "cpu=4,memory=8Gi,ephemeral-storage=3Gi"

Results: after rke up, both hosts end up with the same kube-reserved value.

shellsuperhu commented 1 year ago

From #1184:

  1. Should be able to define different reserve values for different kubelet hosts with different resource capacities. Usually the worker hosts have more resources than the master hosts.

Has this been implemented in RKE yet?
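If per-node values were supported, they could be derived from each node's capacity. A sketch of one sizing heuristic, with tiered percentages modeled on the memory-reservation formula published by managed Kubernetes offerings (this is an illustrative heuristic, not an RKE feature; the tier boundaries are assumptions):

```python
# Sketch: sizing kube-reserved memory from node capacity using tiered
# percentages (25% of the first 4 GiB, 20% of the next 4, 10% of the next 8,
# 6% of the next 112, 2% above 128 GiB). Heuristic only, not an RKE feature.

def kube_reserved_memory_gib(capacity_gib: float) -> float:
    tiers = [(4, 0.25), (4, 0.20), (8, 0.10), (112, 0.06)]
    reserved, remaining = 0.0, capacity_gib
    for size, frac in tiers:
        take = min(remaining, size)
        reserved += take * frac
        remaining -= take
        if remaining <= 0:
            break
    reserved += max(remaining, 0.0) * 0.02  # capacity above 128 GiB
    return reserved

for cap in (8, 16, 64):
    print(f"{cap} GiB node -> kube-reserved memory ~{kube_reserved_memory_gib(cap):.2f} GiB")
```

With this kind of formula, an 8 GiB master and a 64 GiB worker would get very different reservations, which is the behavior the issue asks RKE to express per host.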

kinarashah commented 1 year ago

@shellsuperhu We don’t have a way to do this in RKE at the moment, I’m taking a look to see what would be our best option here. Thanks for opening the issue!

github-actions[bot] commented 1 year ago

This repository uses an automated workflow to automatically label issues which have not had any activity (commit/comment/label) for 60 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the workflow can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the workflow will automatically close the issue in 14 days. Thank you for your contributions.

kinarashah commented 1 year ago

Adding comment for the bot

frankbou commented 1 year ago

Adding comment for the bot.

We have several nodepools with different VM sizes. It would probably make sense to be able to specify different kube-reserved attributes (e.g. one per nodepool).

frankbou commented 1 year ago

Adding comment for the bot
