ReyRen opened this issue 1 year ago
@ReyRen From the debug bundle provided, it looks like the driver pod logs are truncated. Can you get the logs from the "nvidia-driver-ctr" container within the driver pod? It looks like the NVIDIA driver install is not going through. Attaching the dmesg logs will also help.
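For reference, the container logs and kernel messages requested above can be collected with commands along these lines (a sketch only; the `gpu-operator` namespace, the daemonset label, and the pod name placeholder are assumptions to adjust for your deployment):

```shell
# List the driver pods (namespace and label may differ in your install)
kubectl get pods -n gpu-operator -l app=nvidia-driver-daemonset

# Fetch logs from the driver install container specifically
kubectl logs -n gpu-operator <driver-pod-name> -c nvidia-driver-ctr

# Capture recent kernel messages on the affected node
dmesg | tail -n 200
```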
I am also facing a similar issue. In my case, I want to enable RDMA and disable useHostMofed for the Network Operator installation on OpenShift:
Apart from the GPU Operator and monitoring pods, all the others are stuck in the Init state.
1. Quick Debug Information
2. Issue or feature description
When attempting to use the GDRDMA feature, I followed the deployment instructions described in the GPU Operator documentation. I had already installed the OFED driver on the physical machine (in non-containerized form), so I set the parameters "--set driver.rdma.enabled=true --set driver.rdma.useHostMofed=true". But the driver daemonset pod gets an error:
Here is the pod status:
4. Information to attach (optional if deemed irrelevant)
kubectl get ds -n OPERATOR_NAMESPACE
The full debug bundle has already been sent to operator_feedback@nvidia.com
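For context, the install described above corresponds to a helm invocation roughly like the following (a sketch under assumptions: the release name, namespace, and the `nvidia/gpu-operator` chart alias are illustrative; only the two `driver.rdma.*` values come from the report):

```shell
# Sketch of the GPU Operator install with RDMA enabled and host-installed MOFED
helm install gpu-operator nvidia/gpu-operator \
    -n gpu-operator --create-namespace \
    --set driver.rdma.enabled=true \
    --set driver.rdma.useHostMofed=true
```

With useHostMofed=true the driver container expects the MOFED stack to already be loaded on the host, which is why the maintainers asked for dmesg output to confirm the host-side driver state.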