Open pbochynski opened 5 months ago
Open questions
Impacts
Feedback from stakeholders:
@ebensom : gVisor support for our default worker pool (used by Kyma workloads).
@varbanv : Currently supported worker parameter in the RuntimeCR: https://github.com/kyma-project/infrastructure-manager/blob/main/config/samples/infrastructuremanager_v1_runtime.yaml#L56
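For reference, the workers section in that sample looks roughly like the sketch below. This is a hedged reconstruction assuming the Runtime CR embeds Gardener's worker spec; the values are illustrative, not taken from the sample.

```yaml
# Sketch of a Runtime CR worker entry (Gardener-style worker spec);
# names and values are illustrative, see the linked sample for the real ones.
spec:
  shoot:
    provider:
      workers:
        - name: cpu-worker-0
          machine:
            type: m5.xlarge
          minimum: 3
          maximum: 20
          zones:
            - eu-central-1a
            - eu-central-1b
            - eu-central-1c
```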
Next steps / Action items:
In general, I see growing demand for GPUs explicitly requested by different teams, some for AI, others for ML algorithms. The scope, in any case, is always to have dedicated nodes to run specific tasks.
It would also be reasonable to include m6g and m6in (or the currently available generation) for SAP for Me. One note on g5 and r7i: these are required for SAP Intelligent Product Recommendation.
@tobiscr 1) @PK85 + @ebensom : decide on the configuration options we are exposing to customers and track it in this issue
We will go simple on the KEB side. We will keep these (mandatory) parameters at the root for the system node pool (we will adjust the descriptions):

```
"autoScalerMax": ...
"autoScalerMin": ...
"machineType": ...
```

NOTE: this pool is always HA, with a minimum of 3 nodes. We still need to decide how to name that worker node pool; probably we will just use some default name for now.

And a new (optional) array of worker node pools for customer usage:

```
"additionalWorkerNodePools": [
  {
    "autoScalerMax": ...,
    "autoScalerMin": ...,
    "machineType": ...
  }
]
```

NOTE: for now the same validation applies as for the system pool, which means HA is mandatory.
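Put together, the provisioning parameters would then look roughly like this (machine type names are just placeholders; the minimum of 3 reflects the mandatory HA validation):

```json
{
  "machineType": "m5.xlarge",
  "autoScalerMax": 10,
  "autoScalerMin": 3,
  "additionalWorkerNodePools": [
    {
      "machineType": "m5.2xlarge",
      "autoScalerMax": 20,
      "autoScalerMin": 3
    }
  ]
}
```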
Regarding machineTypes, we keep what we have for now and do not extend the list. The first reason is that we need to focus on running Kyma modules only in the system worker node pool. The second reason is that the existing KMC will keep working without any changes.
Later, once this is released and we see that everything works, we can add new machine types, including GPU ones. That requires adjusting billing, among other things.
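(For reference, consuming such GPU nodes would follow the standard Kubernetes pattern of requesting the extended resource. The snippet below is a hypothetical example and assumes the NVIDIA device plugin advertises nvidia.com/gpu on those nodes:)

```yaml
# Hypothetical pod requesting one GPU; assumes a GPU machine type exists and
# the NVIDIA device plugin is installed on the corresponding nodes.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-task
spec:
  containers:
    - name: trainer
      image: registry.example.com/training:latest  # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1
```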
Cheers, PK
Hello, my name is Christoph, I am a project manager at Ingentis, and we are using Kyma running on SAP BTP (currently running 10 clusters in 4 different landscapes). We are also looking forward to having different node pools in Kyma, with the following use case:
We have some workloads that require a very high amount of memory in a single operation. The requirements can go up to 128 GB of RAM. Of course, we do not want to run all nodes of our cluster on 128 GB machines, because that would be very expensive. The operations themselves cannot be optimized with low effort (we are generating large export files for PowerPoint and PDF, and the third-party libraries we use for this do not support streamed or chunked exports; they require holding everything in memory).
So for us it would be important to have a system node pool with small machines (like 16 GB or 32 GB) and then an additional node pool for the heavy workloads (like 128 GB machines). It would be important for us to be able to scale the additional node pool down to zero, because we only need the expensive machines when there are heavy workloads. So the moment a user queues a heavy workload, we would spawn a pod on the additional node pool, the node pool would scale up, execute the workload (which typically takes some hours), and then scale back down to zero after the workloads are done.
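For illustration, this is roughly how we would pin such a job to the additional pool, assuming its nodes carry a dedicated label and taint (all names below are made up). With scale-from-zero, the cluster autoscaler could then bring a node up for the job and remove it again once the job has finished:

```yaml
# Hypothetical Job pinned to a high-memory pool; label, taint, and image are made up.
apiVersion: batch/v1
kind: Job
metadata:
  name: heavy-export
spec:
  template:
    spec:
      nodeSelector:
        pool: high-mem              # made-up label on the additional pool's nodes
      tolerations:
        - key: dedicated            # made-up taint keeping other workloads off the pool
          value: high-mem
          effect: NoSchedule
      containers:
        - name: export
          image: registry.example.com/export-worker:latest  # placeholder image
          resources:
            requests:
              memory: 120Gi         # forces scheduling onto the large machines
      restartPolicy: Never
```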
We do not require new machine types like GPU or ARM machines.
I hope this is a state we can reach at some point. As I understand it, additional node pools are currently planned to be released with HA, so they have to have at least 3 nodes permanently, without the option to scale to zero?
Kind regards, Christoph
Hi @ChristophRothmeier, thanks for your request.
The multiple worker pool feature is currently in implementation and will be rolled out by the end of this year. Initially, the list of supported machine types will not be extended; it includes the same machine types we offer when creating a new Kyma runtime via the BTP cockpit. But support for additional machine types is already agreed and will be added soon after the worker pool feature is productive.
For go-live, we will also offer only worker pools with HA support (meaning 3 nodes is the minimum). Scaling to 0 nodes is not possible with an HA worker pool, but it can be achieved by dropping the worker pool and re-creating it afterwards.
We are already in discussions to allow non-HA worker pools with fewer than 3 nodes. Such pools would also allow scaling to 0 nodes.
Hi Tobias,
thanks for the response. For us it would be huge to have the ability to scale additional worker pools down to zero with non-HA support. Could you post an update in this issue as soon as your discussions on this topic have progressed and it is clear if and when it will be implemented?
Thanks Christoph
Description
Kyma clusters should support multiple machine types simultaneously, for example GPU and ARM nodes, or network-, memory-, and CPU-optimized nodes.
Acceptance criteria:
Reasons
Our customers demand ARM and GPU nodes in Kyma clusters to run their workloads on the architecture supporting their use cases. Examples:
Related issues
#18195