kyma-project / infrastructure-manager


Multiple worker groups [KIM/feature] #46

Open pbochynski opened 12 months ago

pbochynski commented 12 months ago

Description: Enable the possibility to create multiple worker groups with different machine types, volume types, node labels, annotations, and taints.

See Gardener specs:

Current example Shoot spec from the Provisioner:

    workers:
      - cri:
          name: containerd
        name: cpu-worker-0
        machine:
          type: m5.xlarge
          image:
            name: gardenlinux
            version: 1.2.3
          architecture: amd64
        maximum: 1
        minimum: 1
        maxSurge: 1
        maxUnavailable: 0
        volume:
          type: gp2
          size: 50Gi
        zones:
          - eu-central-1a
        systemComponents:
          allow: true
    workersSettings:
      sshAccess:
        enabled: true
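
With this feature, the same `workers` list would hold several pools. A minimal sketch of what a two-pool spec could look like (the second pool's name, machine type, labels, and taint below are hypothetical, not an agreed design):

    workers:
      - name: cpu-worker-0
        machine:
          type: m5.xlarge
        minimum: 1
        maximum: 3
        zones:
          - eu-central-1a
        systemComponents:
          allow: true             # default pool keeps the system components
      - name: mem-worker-0        # hypothetical second pool
        machine:
          type: r5.2xlarge        # memory-optimized machine type
        minimum: 0
        maximum: 2
        zones:
          - eu-central-1a
        labels:
          workload-type: memory-intensive
        taints:
          - key: dedicated
            value: memory
            effect: NoSchedule    # keep general workloads off this pool
        systemComponents:
          allow: false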

Reasons: One size doesn't fit all. Many applications require dedicated nodes for particular services.
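
For example, a service could then be pinned to the hypothetical `mem-worker-0` pool sketched above via a node selector and a matching toleration (a sketch; Gardener labels each node with `worker.gardener.cloud/pool=<pool-name>`, and the workload name and image are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: memory-hungry-service      # hypothetical workload
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: memory-hungry-service
      template:
        metadata:
          labels:
            app: memory-hungry-service
        spec:
          nodeSelector:
            worker.gardener.cloud/pool: mem-worker-0   # pool label set by Gardener
          tolerations:
            - key: dedicated
              value: memory
              effect: NoSchedule       # tolerate the pool's taint
          containers:
            - name: app
              image: example/app:latest                # placeholder image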

kyma-bot commented 10 months ago

This issue or PR has been automatically marked as stale due to the lack of recent activity. Thank you for your contributions.

This bot triages issues and PRs according to the following rules:

- After 60d of inactivity, `lifecycle/stale` is applied
- After 7d of inactivity since `lifecycle/stale` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Close this issue or PR with `/close`

If you think that I work incorrectly, kindly raise an issue with the problem.

/lifecycle stale

kyma-bot commented 9 months ago

This issue or PR has been automatically closed due to the lack of activity. Thank you for your contributions.

This bot triages issues and PRs according to the following rules:

- After 60d of inactivity, `lifecycle/stale` is applied
- After 7d of inactivity since `lifecycle/stale` was applied, the issue is closed

You can:

- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle stale`

If you think that I work incorrectly, kindly raise an issue with the problem.

/close

kyma-bot commented 9 months ago

@kyma-bot: Closing this issue.

In response to [this](https://github.com/kyma-project/infrastructure-manager/issues/46#issuecomment-1827920481):

> This issue or PR has been automatically closed due to the lack of activity.
> Thank you for your contributions.
>
> This bot triages issues and PRs according to the following rules:
> - After 60d of inactivity, `lifecycle/stale` is applied
> - After 7d of inactivity since `lifecycle/stale` was applied, the issue is closed
>
> You can:
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle stale`
>
> If you think that I work incorrectly, kindly [raise an issue](https://github.com/kyma-project/test-infra/issues/new/choose) with the problem.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

tobiscr commented 9 months ago

@pbochynski: QQ - is this feature still relevant? If yes, I will start the alignment with the KEB guys, as it also needs their involvement.

pbochynski commented 9 months ago

The issue is part of a bigger Epic: https://github.com/kyma-project/kyma/issues/18195

tobiscr commented 6 months ago

We agreed with @varbanv and @PK85 to start with a minimal worker pool configuration, probably similar to the parameters we already provide to Google.
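
For context, a minimal per-pool parameter set on the provisioning side could look roughly like this (a sketch only; the field names below, including `additionalWorkerNodePools`, are hypothetical and would need alignment with KEB):

    # Hypothetical minimal worker-pool parameters (not an agreed API):
    additionalWorkerNodePools:
      - name: extra-pool          # pool identifier
        machineType: m5.xlarge    # machine type for this pool
        autoScalerMin: 1          # minimum node count
        autoScalerMax: 5          # maximum node count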

tobiscr commented 2 months ago

JFYI:

It's important to set

    systemComponents:
      allow: true

to ensure the pool nodes get a label which indicates the related worker pool. This is important for later scheduling rules (via affinity configurations etc.).
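
For instance, a workload could later be pinned to a specific pool with a node-affinity rule keyed on that label (a sketch; Gardener labels each node with `worker.gardener.cloud/pool=<pool-name>`, and `cpu-worker-0` is the pool name from the example above):

    # Pod spec fragment: schedule only onto nodes of the cpu-worker-0 pool
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: worker.gardener.cloud/pool
                  operator: In
                  values:
                    - cpu-worker-0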