bottlerocket-os / bottlerocket

An operating system designed for hosting containers
https://bottlerocket.dev

Node doesn't expose GPU resource on g4dn.[n]xlarge #4087

Open andrescaroc opened 4 months ago

andrescaroc commented 4 months ago

Image I'm using: Bottlerocket OS 1.20.3 (aws-k8s-1.26-nvidia), ami-09469fd78070eaac6

What I expected to happen: every time I start a Bottlerocket OS 1.20.3 (aws-k8s-1.26-nvidia) node (ami-09469fd78070eaac6) on a g4dn.[n]xlarge instance type in EKS, it should expose the GPU count to pods:

Capacity:
  ...
  nvidia.com/gpu:              1
  ...
Allocatable:
  ...
  nvidia.com/gpu:              1
  ...

What actually happened: in roughly 5% of cases, a node started this way did not expose the GPU count, so pods requesting nvidia.com/gpu: 1 could not be scheduled and stayed in Pending, waiting for a node:

Capacity:
  cpu:                8
  ephemeral-storage:  61904460Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32366612Ki
  pods:               29
Allocatable:
  cpu:                7910m
  ephemeral-storage:  55977408418
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             31676436Ki
  pods:               29
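Affected nodes can be spotted across a fleet by checking each node's allocatable resources for the missing nvidia.com/gpu entry. A minimal sketch (the node objects below are illustrative; in practice they would come from `kubectl get nodes -o json` or the Kubernetes API):

```python
# Sketch: flag GPU nodes that failed to advertise nvidia.com/gpu.
# Node dicts mirror the shape of `kubectl get nodes -o json` items.

def missing_gpu(node: dict) -> bool:
    """True if the node advertises no nvidia.com/gpu in allocatable."""
    allocatable = node.get("status", {}).get("allocatable", {})
    return allocatable.get("nvidia.com/gpu", "0") == "0"

# Illustrative data: one healthy node, one exhibiting this issue.
nodes = [
    {"metadata": {"name": "healthy"},
     "status": {"allocatable": {"cpu": "7910m", "nvidia.com/gpu": "1"}}},
    {"metadata": {"name": "broken"},
     "status": {"allocatable": {"cpu": "7910m"}}},
]

affected = [n["metadata"]["name"] for n in nodes if missing_gpu(n)]
print(affected)  # ['broken']
```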

How to reproduce the problem: Note: this issue has existed for more than a year; see the Slack thread here.

Current settings:

- Karpenter EC2NodeClass:

apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: random-name
spec:
  amiFamily: Bottlerocket
  blockDeviceMappings:

Using the session manager -> admin-container -> sheltie:

bash-5.1# lsmod | grep nvidia
nvidia_uvm           1454080  0
nvidia_modeset       1265664  0
nvidia              56004608  2 nvidia_uvm,nvidia_modeset
drm                   626688  1 nvidia
backlight              24576  2 drm,nvidia_modeset
i2c_core              102400  2 nvidia,drm

bash-5.1# systemctl list-unit-files | grep nvidia
nvidia-fabricmanager.service                                                          enabled         enabled
nvidia-k8s-device-plugin.service                                                      enabled         enabled

bash-5.1# journalctl -b -u nvidia-k8s-device-plugin
-- No entries --
bash-5.1# journalctl -b -u nvidia-fabricmanager.service
-- No entries --

bash-5.1# journalctl --list-boots
IDX BOOT ID                          FIRST ENTRY                 LAST ENTRY
  0 c25bd0fd67f04681946a64f8c5c57878 Tue 2024-07-09 22:06:03 UTC Fri 2024-07-12 22:02:13 UTC

From the Slack thread, someone suggested this:

Grasping at straws, but I wonder if this is some sort of initialization race condition where the kubelet service starts before the NVIDIA device is ready.

arnaldo2792 commented 4 months ago

Thanks for reporting this! By any chance do you have the instance running? It seems odd that the device plugin isn't showing any output.

andrescaroc commented 4 months ago

Yes sir, I have the instance running.

I agree. The previous time I reported the incident (Slack thread), the output was different:

# journalctl -u nvidia-k8s-device-plugin
Apr 14 06:03:47 ip-192-168-114-245.eu-central-1.compute.internal systemd[1]: Dependency failed for Start NVIDIA kubernetes device plugin.
Apr 14 06:03:47 ip-192-168-114-245.eu-central-1.compute.internal systemd[1]: nvidia-k8s-device-plugin.service: Job nvidia-k8s-device-plugin.service/start failed with result 'dependency'.

But this time it is empty.

andrescaroc commented 4 months ago

@arnaldo2792 let me know if there are any steps you'd like me to perform to diagnose the issue.

larvacea commented 3 months ago

I am investigating on this end. On the EC2 g4dn.* instance family Bottlerocket may require manual intervention to disable GSP firmware download. This has to happen during boot, before Bottlerocket loads the nvidia kmod. I will find the relevant API to set this as a boot parameter and test the results. Here's the relevant line from nvidia-smi -q:

    GSP Firmware Version                  : 535.183.01

This shows that the nvidia kmod downloaded firmware to the GSP during boot. The desired state is:

    GSP Firmware Version                  : N/A

The slightly better news is that we do have an issue open internally to select the "no GSP download" option on appropriate hardware, without requiring any configuration.

andrescaroc commented 3 months ago

@larvacea I want to thank you for taking the time to investigate this strange issue. Also I am happy that you found some breadcrumbs on what the problem is. :clap:

larvacea commented 3 months ago

Here's one way to set the relevant kernel parameter using apiclient:

apiclient apply <<EOF
[settings.boot.kernel-parameters]
"nvidia.NVreg_EnableGpuFirmware"=["0"]
[settings.boot]
reboot-to-reconcile = true
EOF
apiclient reboot

After the instance reboots, nvidia-smi -q should report N/A for GSP Firmware Version. One can use the same TOML fragment as part of instance user data; that's why it includes reboot-to-reconcile: Bottlerocket should then reboot automatically whenever the kernel-parameters setting changes the kernel command line. I do not know if this is responsible for the 5% failure rate you see. I'd love to hear whether this helps.

andrescaroc commented 3 months ago

My understanding is that if I set the kernel parameter "nvidia.NVreg_EnableGpuFirmware"=["0"], I can be 100% sure the GSP firmware won't be downloaded, and that would be enough for my use case, where Karpenter starts and shuts down nodes on demand. (I don't have long-lived nodes.)

My understanding is also that reboot-to-reconcile = true is there to fix a long-lived node by applying the firmware parameter after a reboot, which is not required in my use case.

Based on that understanding, my fix would be to add the firmware parameter to the userData of the Karpenter EC2NodeClass as follows:

apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: random-name
spec:
  amiFamily: Bottlerocket
  blockDeviceMappings:
  - deviceName: /dev/xvda
    ebs:
      deleteOnTermination: true
      volumeSize: 4Gi
      volumeType: gp3
  - deviceName: /dev/xvdb
    ebs:
      deleteOnTermination: true
      iops: 3000
      snapshotID: snap-d4758cc7f5f11
      throughput: 500
      volumeSize: 60Gi
      volumeType: gp3
  metadataOptions:
    httpEndpoint: enabled
    httpProtocolIPv6: disabled
    httpPutResponseHopLimit: 2
    httpTokens: required
  role: KarpenterNodeRole-prod
  securityGroupSelectorTerms:
  - tags:
      karpenter.sh/discovery: prod
  subnetSelectorTerms:
  - tags:
      Name: '*Private*'
      karpenter.sh/discovery: prod
  tags:
    nodepool: random-name
    purpose: prod
    vendor: random-name
  userData: |-
    [settings.boot.kernel-parameters]
    "nvidia.NVreg_EnableGpuFirmware"=["0"]

However, I don't know the internals of that process; maybe my understanding is wrong and I need the reboot-to-reconcile setting too.

Please correct me if I am wrong

larvacea commented 3 months ago

The reboot-to-reconcile setting solves an ordering problem in Bottlerocket boot on aws EC2 instances. We can't access user data until the network is available. If anything in user data changes the kernel command line, we need to persist the command line and reboot for the new kernel command line to have any effect. If reboot-to-reconcile is true and the desired kernel command line is different from the one that Bottlerocket booted with, we reboot. On this second boot, the kernel command line does not change, so we will not reboot (and thus will not enter a reboot loop that prevents the instance from starting).

We intend to add logic to automate this and set the desired kmod option before we load the driver. In general-purpose Linux operating systems, one could solve the problem by putting the desired configuration in /etc/modprobe.d. The driver is a loadable kmod, so modprobe will find this configuration file if it exists before the kmod is loaded. On a general-purpose Linux machine, the system administrator has access to /etc, and /etc persists across boots.

In Bottlerocket, /etc is not persisted. It is a memory-resident file system (tmpfs) and built during boot by systemd. One can place the driver configuration in the kernel command line even though the driver is not resident; modprobe reads the command line and adds any configuration it finds to the variables it sourced from /etc/modprobe.d (or possibly a few other locations).
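For comparison, on a general-purpose distribution the same kmod option would typically be persisted with a modprobe.d entry (the file name here is illustrative):

```
# /etc/modprobe.d/nvidia-gsp.conf -- hypothetical file name;
# sets the option before the nvidia kmod is loaded:
options nvidia NVreg_EnableGpuFirmware=0
```

Since Bottlerocket's /etc is tmpfs, the kernel command line serves the same role there, as described above.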

Hope this helps.

xqianwang commented 1 month ago

@andrescaroc Is your karpenter solution working? We are facing similar issues with bottlerocket.