NVIDIA / gpu-operator

NVIDIA GPU Operator creates, configures, and manages GPUs in Kubernetes
https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html
Apache License 2.0

Issue with autoscaler scheduling #708

Open Jasper-Ben opened 6 months ago

Jasper-Ben commented 6 months ago

1. Quick Debug Information

2. Issue or feature description

We are using the gpu-operator to manage GPU drivers on k8s autoscaler-managed EC2 instances. The autoscaler is configured to scale up from 0 to 8 instances.

The issue we are seeing is that starting a single GPU workload triggers multiple (~4) node scale-ups before the autoscaler marks all but one of them as unneeded and scales them back down.

We also attempted the following with no success: https://github.com/NVIDIA/gpu-operator/issues/140#issuecomment-847998871

Currently, our best guess is that the first newly provisioned node is marked as ready even though the GPU operator has not finished its setup. As a result, no GPU resource is available on the node yet, which causes our GPU workload Pod to remain unschedulable.

The cluster autoscaler therefore sees that the node is ready while the workload Pod is still unschedulable, and triggers an additional scale-up. This process repeats until the first node has completed the GPU setup, providing the requested GPU resource and making the workload Pod schedulable.

3. Steps to reproduce the issue

  1. Set up autoscaling with a minimum node count of 0
  2. Start a workload requesting GPU resources, for example the minimal Pod sketched below
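
For illustration, a minimal workload along these lines is enough to trigger the scale-up (name and image are placeholders; any Pod requesting nvidia.com/gpu will do):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test                             # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # placeholder image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1                        # a single GPU request forces a scale-up from zero
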
shivamerla commented 6 months ago

The cluster autoscaler therefore sees that the node is ready while the workload Pod is still unschedulable, and triggers an additional scale-up. This process repeats until the first node has completed the GPU setup, providing the requested GPU resource and making the workload Pod schedulable.

It takes 3-5 minutes for the GPU stack to become ready on the node (driver installation, container-toolkit setup, etc.). Please increase the timeout the autoscaler uses for marking the node as ready to resolve this.
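
For reference, a sketch of where such a timeout could be raised, assuming the cluster-autoscaler runs as a Deployment; --max-node-provision-time (how long the autoscaler waits for a new node before giving up on it) is an assumption about which knob is meant here, and the image tag is only an example:

# Excerpt from a cluster-autoscaler Deployment spec (only the relevant container args shown)
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.2   # example tag
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --max-node-provision-time=30m   # default is 15m; raised to cover GPU driver installation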

Jasper-Ben commented 5 months ago

:wave:

It takes 3-5 minutes for the GPU stack to become ready on the node (driver installation, container-toolkit setup, etc.). Please increase the timeout the autoscaler uses for marking the node as ready to resolve this.

I don't think it is that simple. The cluster-autoscaler watches for the node.kubernetes.io/not-ready taint to disappear when bringing up a node for the GPU workload. The problem is that the node is marked ready before the gpu-operator has done its thing, so the cluster-autoscaler assumes the workload can already be scheduled onto the node, which it cannot be, since the requested GPU resources cannot be fulfilled yet.

The "proper" way would probably to have the operator add autoscaler startup taint which the operator can then remove once the GPU stack is ready.

FWIW, we are currently working around this by using kyverno:

  1. We define our ASG with the tag "k8s.io/cluster-autoscaler/node-template/taint/nvidia.com/gpu" = "true:NoSchedule", which causes the autoscaler to add the specified taint to the node on scale-up (a Terraform sketch of this tag is included after the policy below)
  2. The GPU operator does its thing
  3. We use the following kyverno mutating policy (Terraform HCL format) to remove the taint once the GPUs are available (i.e. the nvidia.com/gpu.count label exists):
resource "kubernetes_manifest" "clusterpolicy_untaint_node_when_gpu_ready" {
  manifest = {
    "apiVersion" = "kyverno.io/v1"
    "kind" = "ClusterPolicy"
    "metadata" = {
      "name" = "untaint-node-when-gpu-ready"
    }
    "spec" = {
      "background" = false
      "rules" = [
        {
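          # Build the replacement taint list: every existing taint except the GPU-ready startup taint.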
          "context" = [
            {
              "name" = "newtaints"
              "variable" = {
                "jmesPath" = "request.object.spec.taints[?key!='ignore-taint.cluster-autoscaler.kubernetes.io/gpu-node-ready']"
              }
            },
          ]
          "match" = {
            "any" = [
              {
                "operations" = [
                  "CREATE",
                  "UPDATE",
                ]
                "resources" = {
                  "kinds" = [
                    "Node",
                  ],
                  "selector" = {
                    "matchExpressions" = [
                      { "key" = "nvidia.com/gpu.count", "operator" = "Exists", "values" = [] }
                    ]
                  }
                }
              },
            ]
          }
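          # Replace the node's taints with the filtered list, dropping the startup taint once GPUs are available.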
          "mutate" = {
            "patchesJson6902" = <<-EOT
            - path: /spec/taints
              op: replace
              value: {{ newtaints }}
            EOT
          }
          "name" = "remove-taint-when-gpu-ready"
        },
      ]
    }
  }
}
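
For completeness, step 1 of the workaround looks roughly like this on the Terraform side; resource names, subnets, and the launch template are placeholders, and only the tag block is the relevant part:

resource "aws_autoscaling_group" "gpu_nodes" {
  name                = "gpu-nodes"                  # placeholder
  min_size            = 0
  max_size            = 8
  vpc_zone_identifier = var.private_subnet_ids       # placeholder

  launch_template {
    id      = aws_launch_template.gpu_nodes.id       # placeholder
    version = "$Latest"
  }

  # Advertises the taint on the group's node template so the cluster-autoscaler
  # takes it into account when scaling the group up from zero.
  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/taint/nvidia.com/gpu"
    value               = "true:NoSchedule"
    propagate_at_launch = true
  }
}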