ROCm / k8s-device-plugin

Kubernetes (k8s) device plugin to enable registration of AMD GPUs with a container cluster
Apache License 2.0

[Feature]: Support detection, allocation and resetting of GPU partitions in CDNA cards #54

Open · lohbe opened this issue 8 months ago

lohbe commented 8 months ago

Suggestion Description

This is more of a question at this point. The CDNA3 MI300X supports up to 8 partitions per card via SR-IOV. Can k8s-device-plugin detect these partitions, expose them for allocation, and reset them when a workload releases them? (A sketch of what a partition request could look like follows the issue details below.)

Operating System

No response

GPU

CDNA, MI300X

ROCm Component

k8s-device-plugin
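
To make the ask concrete: once the plugin exposed partitions, a workload would request one like any other extended resource. Below is a minimal sketch; the resource name amd.com/gpu-partition is an assumption for illustration only (today the plugin advertises whole devices as amd.com/gpu), and the image/command are just examples.

```yaml
# Hypothetical pod spec: requests one GPU partition instead of a whole card.
# amd.com/gpu-partition is an assumed resource name, not something the plugin
# advertises today (whole devices are exposed as amd.com/gpu).
apiVersion: v1
kind: Pod
metadata:
  name: partition-workload
spec:
  containers:
    - name: app
      image: rocm/dev-ubuntu-22.04   # example image
      command: ["rocminfo"]
      resources:
        limits:
          amd.com/gpu-partition: 1
```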

boniek83 commented 2 months ago

We have a couple of AS-8125GS-TNMR2 machines with MI300X and suffer greatly from this as well. Here is NVIDIA's documentation on this topic: https://docs.nvidia.com/datacenter/cloud-native/kubernetes/latest/index.html It would be great to have similar functionality, especially including allocation strategies, available with AMD hardware under Kubernetes.

The only major thing lacking in NVIDIA's implementation is allocation of MIG instances on demand: they are all statically allocated, which is a serious PITA and not elastic at all. Instances should be created when requested (e.g. nvidia.com/mig-1g.5gb: 1) and destroyed when the pod is done; when nvidia.com/gpu: 1 is requested, a full GPU should be attached to the pod; and all of this should be possible at the same time (of course, nvidia.com/mig-1g.5gb and nvidia.com/gpu should land on completely different physical GPUs if requested simultaneously). This would or might create scheduling issues (fragmentation), but it should nevertheless be available as an option, as it has the potential to better utilize available resources and does not require the administrator to be omniscient when statically allocating MIGs.
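
For illustration, here are the two requests described above side by side, using the NVIDIA resource names the comment mentions. This is only a sketch of the desired behaviour: the slice would be carved out when the first pod is scheduled and destroyed when it terminates, while the second pod keeps a whole separate physical GPU. The image tag is just an example.

```yaml
# Pod 1: asks for a single MIG slice (created on demand in the proposed model).
apiVersion: v1
kind: Pod
metadata:
  name: sliced-workload
spec:
  containers:
    - name: app
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # example image
      resources:
        limits:
          nvidia.com/mig-1g.5gb: 1
---
# Pod 2: asks for a whole GPU at the same time, backed by a different card.
apiVersion: v1
kind: Pod
metadata:
  name: full-gpu-workload
spec:
  containers:
    - name: app
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # example image
      resources:
        limits:
          nvidia.com/gpu: 1
```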

boniek83 commented 2 months ago

A very interesting deep dive into what is, and what is going to be, possible on NVIDIA hardware with Kubernetes: https://www.youtube.com/watch?app=desktop&v=qDfFL78QcnQ It seems like CDI is the key to the future.
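
For reference, CDI (Container Device Interface) describes devices in vendor-neutral spec files that the container runtime consumes, so a GPU partition could in principle be published as its own named CDI device. A minimal sketch of what such a spec could look like follows; the device name, partition naming, and device-node paths are assumptions for illustration, not something the plugin generates today.

```yaml
# Hypothetical CDI spec (e.g. placed under /etc/cdi/) exposing one MI300X
# partition as a named device. Names and paths here are assumptions.
cdiVersion: "0.6.0"
kind: amd.com/gpu
devices:
  - name: mi300x-0-partition-0
    containerEdits:
      deviceNodes:
        - path: /dev/kfd
        - path: /dev/dri/renderD128
```

With CDI, the runtime performs the device injection described in the spec, so a device plugin or DRA driver only has to hand out the device name (e.g. amd.com/gpu=mi300x-0-partition-0) rather than enumerate device nodes itself.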