NVIDIA / kubevirt-gpu-device-plugin

NVIDIA k8s device plugin for Kubevirt
BSD 3-Clause "New" or "Revised" License

GPU device plugin interval health check #97

Open nadav213000 opened 3 months ago

nadav213000 commented 3 months ago

Hey,

We use the NVIDIA GPU Operator on OpenShift to expose passthrough GPUs to KubeVirt VMs.

Issue

We experienced an issue where one of the GPUs on the Node became unavailable, but the Node didn't change its reported GPU Capacity or Allocatable resources. The GPU itself wasn't available, and when I tried to create a new VM that requested it, the VM reached a CrashLoopBackOff state until the GPU became available again.

Only after I restarted the nvidia-sandbox-device-plugin-daemonset pod on that specific Node did the Allocatable and Capacity GPU counts change to the right number.
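For reference, this is roughly how the reported counts can be checked on the Node (a minimal example; the passthrough GPU shows up under a nvidia.com/&lt;MODEL&gt; resource name that depends on the specific GPU model):

```bash
# Print the resource maps the kubelet currently reports for the node;
# the passthrough GPU appears under a nvidia.com/<MODEL> resource name.
oc get node <node> -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'
```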

I checked the pods on this Node.

It looks like the pods run an initial health check and then never run it again. Is there a way to make the Operator pods validate the health of the GPUs on an interval, so that the resources available on the Node are reflected correctly?
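As a stop-gap, the kind of node-side watchdog we have in mind looks roughly like the sketch below. This is only an illustration, not the plugin's own health check: the namespace, pod label, and polling interval are assumptions that would need to match the actual deployment.

```bash
#!/usr/bin/env bash
# Hypothetical watchdog sketch: periodically compare the NVIDIA PCI devices
# visible in sysfs against a baseline and restart the sandbox device-plugin
# pod on this node when the count changes, forcing it to re-enumerate GPUs.
NS="nvidia-gpu-operator"                            # assumed namespace
LABEL="app=nvidia-sandbox-device-plugin-daemonset"  # assumed pod label
NODE="$(hostname)"

# NVIDIA's PCI vendor ID is 0x10de; count the NVIDIA PCI functions present.
count_gpus() {
  grep -l "0x10de" /sys/bus/pci/devices/*/vendor 2>/dev/null | wc -l
}

baseline="$(count_gpus)"
while true; do
  sleep 60
  current="$(count_gpus)"
  if [ "$current" -ne "$baseline" ]; then
    echo "NVIDIA PCI device count changed on ${NODE}: ${baseline} -> ${current}"
    oc delete pod -n "$NS" -l "$LABEL" --field-selector "spec.nodeName=${NODE}"
    baseline="$current"
  fi
done
```

This only works around the symptom by restarting the plugin; a periodic health check inside the plugin itself would be the proper fix.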

How to reproduce

I reproduced the issue by logically removing one of the GPU PCI devices from the node using the command:

echo "1" > /sys/bus/pci/devices/<gpu_pci_id>/remove

and validated that the GPU was no longer visible from the host using lspci.

Then, in the output of oc describe node <node>, the number of GPUs exposed didn't change. After restarting the sandbox pod, the number of GPUs was updated to the right value.

To re-add the GPU, you can run the command:

echo "1" > /sys/bus/pci/rescan

and then restart the sandbox pod again.
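Restarting the sandbox pod amounts to deleting it and letting its DaemonSet recreate it; for example (the namespace and pod label are assumptions that depend on the deployment):

```bash
# Assumed namespace and label; the DaemonSet recreates the pod automatically,
# and on startup it re-enumerates the GPUs visible on the node.
oc delete pod -n nvidia-gpu-operator \
  -l app=nvidia-sandbox-device-plugin-daemonset \
  --field-selector spec.nodeName=<node>
```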

Versions

rthallisey commented 3 months ago

Thanks for the feature request.

GPU health checks are an important feature that we'd love to have had yesterday. The trouble, however, is finding the most effective way to solve the problem so that we're correctly detecting failures and remediating them. The areas we're investigating are fault-tolerant scheduling, so that we avoid problematic GPUs, and identifying the proper remediation steps, so that users aren't impacted.

I'll follow up on this issue when we've aligned on a solution.

cc @cdesiniotis

doronkg commented 5 days ago

Hey @rthallisey, I'm writing here on behalf of my colleague @nadav213000. It seems the resolution to this issue was introduced in #105 and released in v1.2.8, correct?

visheshtanksale commented 5 days ago

> Hey @rthallisey, I'm writing here on behalf of my colleague @nadav213000. It seems the resolution to this issue was introduced in #105 and released in v1.2.8, correct?

Yes, https://github.com/NVIDIA/kubevirt-gpu-device-plugin/pull/105 resolves the scenarios you have mentioned here and is released with v1.2.8.
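For anyone verifying the fix, one way to confirm which plugin version is actually running is to inspect the DaemonSet's image tag (the namespace and DaemonSet name below are assumptions taken from this thread; adjust to your deployment):

```bash
# Print the image(s) used by the sandbox device-plugin DaemonSet.
oc get daemonset nvidia-sandbox-device-plugin-daemonset \
  -n nvidia-gpu-operator \
  -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'
```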