NVIDIA / k8s-device-plugin

NVIDIA device plugin for Kubernetes

Advertising specific GPU types as separate extended resource #424

Open · deepanker-s opened this issue 1 year ago

deepanker-s commented 1 year ago

Hello, I am working at Uber.

1. Feature description

Advertising special hardware (specific GPU types, say A100) as a separate extended resource.

As of now, there is a single blanket resource, "nvidia.com/gpu", covering all GPU types that this plugin supports. If we want our pods to run only on certain GPU types, then we need to be able to request such a resource.

There are two ways to request such a specific resource:

  1. [Existing] Using nodeLabels/nodeSelectors
  2. [New] Advertising the GPU type directly as a new resource such as "nvidia.com/gpu-A100-...." (see the sketch below)

This added functionality can be gated behind a configuration flag and can use gpu-feature-discovery labels to extract the SKU/GPU type.
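
To make the two options concrete, here is a minimal sketch. The node label in the first snippet is the standard nvidia.com/gpu.product label published by gpu-feature-discovery (the value shown is just an example); the nvidia.com/gpu-A100 resource name in the second is hypothetical, i.e. exactly what this issue proposes:

```yaml
# Option 1 (existing): pin the pod to A100 nodes via a gpu-feature-discovery
# label, while still requesting the generic nvidia.com/gpu resource.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-a100-via-label
spec:
  nodeSelector:
    nvidia.com/gpu.product: NVIDIA-A100-SXM4-40GB  # example GFD label value
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.0.0-base-ubuntu22.04
    resources:
      limits:
        nvidia.com/gpu: 1
---
# Option 2 (proposed): request a per-type extended resource directly.
# nvidia.com/gpu-A100 is a hypothetical name; it does not exist today.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-a100-via-resource
spec:
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.0.0-base-ubuntu22.04
    resources:
      limits:
        nvidia.com/gpu-A100: 1
```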

2. Why

  1. Similar per-type resource advertising is already being done for MIG-enabled devices -

    nvidia.com/gpu
    nvidia.com/mig-1g.5gb
    nvidia.com/mig-2g.10gb
    nvidia.com/mig-3g.20gb
    nvidia.com/mig-7g.40gb
  2. Another reason is that using nodeLabels/nodeSelectors may not be possible due to their limitations (quota enforcement is one concrete example discussed in the comments below).

3. Similar existing work

I found a design doc for "Custom Resource Naming and Supporting Multiple GPU SKUs on a Single Node in Kubernetes".

It actually advertises different GPU types under new resource names, but those different GPU cards have to be on the same node. I am not sure whether the same approach also works when the corresponding GPU cards/types are on different nodes.

4. Summary of queries

  1. Is the above feature request already supported by the "Similar existing work" mentioned above?
  2. If yes, when will that work be approved and made available?

klueska commented 1 year ago

There is still no planned support for this in the k8s-device-plugin. All of the functionality is there (as described in the link you provided), but it is explicitly disabled by this line in the code: https://github.com/NVIDIA/k8s-device-plugin/blob/main/cmd/nvidia-device-plugin/main.go#L322

The future for supporting multiple GPU cards per node is via a new mechanism in Kubernetes called Dynamic Resource Allocation (DRA): https://docs.google.com/document/d/1BNWqgx_SmZDi-va_V31v3DnuVwYnF2EmN7D-O_fB6Oo/edit https://github.com/NVIDIA/k8s-dra-driver

deepanker-s commented 1 year ago

Hey Kevin, Thanks for the info.

I was actually asking about type-specific GPU resource naming for GPUs on different nodes (not on the same node). But it looks like the answer is the same: DRA can help achieve that as well.

deepanker-s commented 1 year ago

Hey Kevin, I understand now that DRA can be used to specify GPU types (A100, H100) for different pods using "GpuClaimParameters".

Is there any functionality to advertise these specified resources/resourceClaims?

Example - using DRA "GpuClaimParameters" (as in the gpu-test6 example):

Will the device plugin advertise resource usage details, i.e. how many A100 devices are being used? Currently we advertise, for example, nvidia.com/gpu : 10

Will it provide details such as the following in any manner? nvidia.com/gpu-A100 : 5
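
For anyone finding this later, a rough sketch of what that pattern looks like with the classic-DRA API from NVIDIA/k8s-dra-driver (as used in its gpu-test examples). The apiVersion, kind, and resourceClassName follow those examples, but the exact schema varies across driver versions, so treat the field names as illustrative:

```yaml
# Illustrative sketch only: GpuClaimParameters comes from NVIDIA/k8s-dra-driver,
# and its exact schema depends on the driver version installed.
apiVersion: gpu.resource.nvidia.com/v1alpha1
kind: GpuClaimParameters
metadata:
  name: a100-params
spec:
  count: 1
  # A GPU-type constraint (e.g. limiting to A100) would go here; the exact
  # selector field name varies by k8s-dra-driver version, so it is omitted.
---
# A ResourceClaimTemplate referencing the parameters above; pods then consume
# it via spec.resourceClaims instead of requesting an extended resource.
apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaimTemplate
metadata:
  name: a100-claim-template
spec:
  spec:
    resourceClassName: gpu.nvidia.com
    parametersRef:
      apiGroup: gpu.resource.nvidia.com
      kind: GpuClaimParameters
      name: a100-params
```

Note that DRA does not publish aggregate counters like nvidia.com/gpu-A100 : 5 in node status; usage would have to be derived from the allocated ResourceClaims.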

dimm0 commented 1 year ago

We're looking to install the YuniKorn scheduler on the cluster, and having distinct resources for different GPU types would help a lot in prioritizing the use of more powerful (and less available) GPUs among users via fair share. That is impossible to do with labels alone.

sjdrc commented 1 year ago

> There is still no planned support for this in the k8s-device plugin

Is there a reason why this isn't planned to be implemented here? This seems like an essential feature for any cluster with more than one model of GPU, and there is currently no adequate workaround at all.

klueska commented 1 year ago

It was a product decision, not an engineering one.

All of the code to support it is merged in the plugin and simply disabled by https://github.com/NVIDIA/k8s-device-plugin/blob/main/cmd/nvidia-device-plugin/main.go#L239.

The decision not to support this gets revisited periodically, but our product team is still not in favor of it, so our hands are tied.

If you want to enable it in a custom build of the plugin, just remove the line referenced above and it should work as described in https://docs.google.com/document/d/1dL67t9IqKC2-xqonMi6DV7W2YNZdkmfX7ibB6Jb-qmk/edit#heading=h.jw5js7865egx.
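
For anyone going the custom-build route, the renaming itself is driven by the plugin's config file. A sketch of the resources section along the lines the design doc describes follows; the patterns and resource names below are illustrative, so check the doc and your plugin version for the exact schema:

```yaml
# Illustrative sketch of the resource-renaming config from the design doc.
# Patterns match device (product) names; names become the advertised resources.
version: v1
flags:
  migStrategy: none
resources:
  gpus:
  - pattern: "*A100*"
    name: nvidia.com/gpu-a100
  - pattern: "*T4*"
    name: nvidia.com/gpu-t4
```

With the disabling line removed, a node with A100s would then advertise nvidia.com/gpu-a100 instead of the blanket nvidia.com/gpu.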

yuzliu commented 1 year ago

@klueska thanks for the explanation. We also explored the extended-resource options, and we even wrote our own component to patch nodes with GPU extended resources. Just curious: would you be open to adding a flag to turn this feature on/off, so we don't have to deploy a customized version of the NVIDIA device plugin?

klueska commented 1 year ago

@yuzliu Do you have multiple GPU types per node? If not, are node-labels from GFD / nodeSelectors not enough for your use case?

yuzliu commented 1 year ago

@klueska Thanks for the reply! We don't have multiple GPU types per node, but we do have multiple GPU types per cluster. We have already deployed GPU feature discovery and have the GPU product label on each GPU node, but that doesn't solve our problem because:

  1. We have clusters with multiple GPU types, e.g. A100 + T4 mixed in one cluster.
  2. We have a ResourceQuota on each namespace and want to enforce quotas per GPU type, e.g. namespace A can only use 1 A100 and 5 T4s (see the sketch below).
  3. We want to collect metrics accurately per GPU type. For example, we'd like to know that namespace A has 4 A100s available, 1 A100 requested, and 3 left.
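
Point 2 is the one only an extended resource can solve: Kubernetes ResourceQuota can count extended resources via the requests.<resource-name> syntax, but it cannot count node labels. A sketch, assuming hypothetical per-type resource names like the ones discussed above:

```yaml
# Sketch: assumes the plugin (or a custom component) advertises hypothetical
# per-type extended resources named nvidia.com/gpu-a100 and nvidia.com/gpu-t4.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: namespace-a
spec:
  hard:
    # Extended resources are quota'd with the requests.<resource-name> form.
    requests.nvidia.com/gpu-a100: "1"
    requests.nvidia.com/gpu-t4: "5"
```
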
klueska commented 1 year ago

Got it -- labels from GPU feature discovery are sufficient for (1), but not for (2) and (3) -- for those you need unique extended resources.

yuzliu commented 1 year ago

Yep, we even have an internal component to advertise extended resources, e.g. V100, A100, and T4. But I'd really love to carry less customized logic internally and instead rely on NVIDIA's official component, to make our long-term maintenance easier.

github-actions[bot] commented 8 months ago

This issue has become stale and will be closed automatically within 30 days if no activity is recorded.

leoncamel commented 7 months ago

Any progress on this issue?

ZDWWWWW commented 5 months ago

Any progress on this issue?