Open linsistqb opened 2 days ago
1. Environment
- HAMi version: hami-2.4.1 (image projecthami/hami:v2.4.1)
- Kubernetes: v1.26.15, using Docker with nvidia as the runtime
- Docker: Server Version 20.10.24
- NVIDIA driver version: 550.67
- OS: Ubuntu 22.04 LTS
- Linux kernel: 5.15.0-112-generic
2. Logs and problem description
I'm hitting the same problem: `register.go:148] "failed to get numa information" err="exit status 255" idx=1`. After this error appears, the workload in the pod fails and nvidia-smi inside the pod no longer works, reporting "Failed to initialize NVML: Unknown Error". A newly created pod, however, runs fine.
hami-device-plugin log:

```text
I1129 08:29:58.044268 639047 register.go:197] Successfully registered annotation. Next check in 30s seconds...
2024-11-29T08:30:28.046041214Z I1129 08:30:28.045863 639047 register.go:132] MemoryScaling= 1 registeredmem= 40960
I1129 08:30:28.378554 639047 register.go:160] nvml registered device id=1, memory=40960, type=NVIDIA A100-SXM4-40GB, numa=0
I1129 08:30:28.378693 639047 register.go:132] MemoryScaling= 1 registeredmem= 40960
I1129 08:30:28.642736 639047 register.go:160] nvml registered device id=2, memory=40960, type=NVIDIA A100-PCIE-40GB, numa=0
2024-11-29T08:30:28.642904839Z I1129 08:30:28.642802 639047 register.go:167] "start working on the devices" devices=[{"id":"GPU-226a1bca-6776-4d34-c118-1480705d24f4","count":10,"devmem":40960,"devcore":100,"type":"NVIDIA-NVIDIA A100-SXM4-40GB","health":true},{"id":"GPU-c96ffde7-75ee-2f7e-255b-d34a594c752b","count":10,"devmem":40960,"devcore":100,"type":"NVIDIA-NVIDIA A100-PCIE-40GB","health":true}]
I1129 08:30:28.648553 639047 util.go:163] Encoded node Devices: GPU-226a1bca-6776-4d34-c118-1480705d24f4,10,40960,100,NVIDIA-NVIDIA A100-SXM4-40GB,0,true:GPU-c96ffde7-75ee-2f7e-255b-d34a594c752b,10,40960,100,NVIDIA-NVIDIA A100-PCIE-40GB,0,true:
I1129 08:30:28.648624 639047 register.go:177] patch node with the following annos map[hami.io/node-handshake:Reported 2024-11-29 08:30:28.648575277 +0000 UTC m=+16711.862703545 hami.io/node-nvidia-register:GPU-226a1bca-6776-4d34-c118-1480705d24f4,10,40960,100,NVIDIA-NVIDIA A100-SXM4-40GB,0,true:GPU-c96ffde7-75ee-2f7e-255b-d34a594c752b,10,40960,100,NVIDIA-NVIDIA A100-PCIE-40GB,0,true:]
2024-11-29T08:30:28.668990035Z I1129 08:30:28.668854 639047 register.go:197] Successfully registered annotation. Next check in 30s seconds...
I1129 08:30:58.680655 639047 register.go:132] MemoryScaling= 1 registeredmem= 40960
I1129 08:30:58.997658 639047 register.go:160] nvml registered device id=1, memory=40960, type=NVIDIA A100-SXM4-40GB, numa=0
I1129 08:30:58.997798 639047 register.go:132] MemoryScaling= 1 registeredmem= 40960
I1129 08:30:59.293295 639047 register.go:160] nvml registered device id=2, memory=40960, type=NVIDIA A100-PCIE-40GB, numa=0
I1129 08:30:59.293366 639047 register.go:167] "start working on the devices" devices=[{"id":"GPU-226a1bca-6776-4d34-c118-1480705d24f4","count":10,"devmem":40960,"devcore":100,"type":"NVIDIA-NVIDIA A100-SXM4-40GB","health":true},{"id":"GPU-c96ffde7-75ee-2f7e-255b-d34a594c752b","count":10,"devmem":40960,"devcore":100,"type":"NVIDIA-NVIDIA A100-PCIE-40GB","health":true}]
I1129 08:30:59.297889 639047 util.go:163] Encoded node Devices: GPU-226a1bca-6776-4d34-c118-1480705d24f4,10,40960,100,NVIDIA-NVIDIA A100-SXM4-40GB,0,true:GPU-c96ffde7-75ee-2f7e-255b-d34a594c752b,10,40960,100,NVIDIA-NVIDIA A100-PCIE-40GB,0,true:
I1129 08:30:59.297915 639047 register.go:177] patch node with the following annos map[hami.io/node-handshake:Reported 2024-11-29 08:30:59.297898796 +0000 UTC m=+16742.512026954 hami.io/node-nvidia-register:GPU-226a1bca-6776-4d34-c118-1480705d24f4,10,40960,100,NVIDIA-NVIDIA A100-SXM4-40GB,0,true:GPU-c96ffde7-75ee-2f7e-255b-d34a594c752b,10,40960,100,NVIDIA-NVIDIA A100-PCIE-40GB,0,true:]
I1129 08:30:59.316088 639047 register.go:197] Successfully registered annotation. Next check in 30s seconds...
I1129 08:31:29.316706 639047 register.go:132] MemoryScaling= 1 registeredmem= 40960
E1129 08:31:29.354091 639047 register.go:148] "failed to get numa information" err="exit status 255" idx=0
I1129 08:31:29.354133 639047 register.go:160] nvml registered device id=1, memory=40960, type=NVIDIA A100-SXM4-40GB, numa=0
I1129 08:31:29.354246 639047 register.go:132] MemoryScaling= 1 registeredmem= 40960
E1129 08:31:29.374291 639047 register.go:148] "failed to get numa information" err="exit status 255" idx=1
2024-11-29T08:31:29.374424073Z I1129 08:31:29.374348 639047 register.go:160] nvml registered device id=2, memory=40960, type=NVIDIA A100-PCIE-40GB, numa=0
I1129 08:31:29.374401 639047 register.go:167] "start working on the devices" devices=[{"id":"GPU-226a1bca-6776-4d34-c118-1480705d24f4","count":10,"devmem":40960,"devcore":100,"type":"NVIDIA-NVIDIA A100-SXM4-40GB","health":true},{"id":"GPU-c96ffde7-75ee-2f7e-255b-d34a594c752b","count":10,"devmem":40960,"devcore":100,"type":"NVIDIA-NVIDIA A100-PCIE-40GB","health":true}]
2024-11-29T08:31:29.380361313Z I1129 08:31:29.380256 639047 util.go:163] Encoded node Devices: GPU-226a1bca-6776-4d34-c118-1480705d24f4,10,40960,100,NVIDIA-NVIDIA A100-SXM4-40GB,0,true:GPU-c96ffde7-75ee-2f7e-255b-d34a594c752b,10,40960,100,NVIDIA-NVIDIA A100-PCIE-40GB,0,true:
I1129 08:31:29.380347 639047 register.go:177] patch node with the following annos map[hami.io/node-handshake:Reported 2024-11-29 08:31:29.380276663 +0000 UTC m=+16772.594404923 hami.io/node-nvidia-register:GPU-226a1bca-6776-4d34-c118-1480705d24f4,10,40960,100,NVIDIA-NVIDIA A100-SXM4-40GB,0,true:GPU-c96ffde7-75ee-2f7e-255b-d34a594c752b,10,40960,100,NVIDIA-NVIDIA A100-PCIE-40GB,0,true:]
I1129 08:31:29.399362 639047 register.go:197] Successfully registered annotation. Next check in 30s seconds...
```
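For what it's worth, "exit status 255" is the string Go's os/exec produces when a child process exits with code 255, so the NUMA lookup in register.go appears to shell out to some external command that starts failing at 08:31:29 even though the NVML calls themselves still succeed. A minimal sketch of that failure mode (my own illustration; I don't know which command register.go actually invokes, and the nvidia-smi / sysfs paths below are just stand-ins):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// numaNodeViaSysfs reads the kernel's NUMA node for a PCI device directly,
// bypassing any external tool. The bus ID "0000:3b:00.0" is illustrative.
func numaNodeViaSysfs(pciBusID string) (string, error) {
	b, err := os.ReadFile("/sys/bus/pci/devices/" + pciBusID + "/numa_node")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	// Shelling out to an external tool: if that tool fails (for example because
	// NVML can no longer be initialized), Go renders its non-zero exit code as
	// the string seen in the plugin log, e.g. "exit status 255".
	if _, err := exec.Command("nvidia-smi", "topo", "-m").CombinedOutput(); err != nil {
		fmt.Printf("failed to get numa information err=%q\n", err)
	}

	// Kernel view as a cross-check; a value of -1 means no NUMA affinity is reported.
	if node, err := numaNodeViaSysfs("0000:3b:00.0"); err == nil {
		fmt.Println("numa node from sysfs:", node)
	}
}
```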
1. Environment
- HAMi versions: 2.4.1 and 2.3.12
- Kubernetes version: 1.22.12, deployed with Docker

2. Logs and problem description
1. The hami-device-plugin log shows:
   "failed to get numa information" err="exit status 255" idx=0
   nvml registered device id=1, memory=15360, type=Tesla T4, numa=0
2. nvidia-smi inside the pod works at first; after some time, running nvidia-smi in the pod again fails with "Failed to initialize NVML: Unknown Error".
3. Two clusters show the same behavior, and it is almost always reproducible.
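Since the breakage only shows up some time after the pod starts, a tiny watcher left running inside an affected pod can pin down the exact moment GPU access disappears, which can then be correlated with the hami-device-plugin, kubelet, and node logs. A rough sketch (my own, not part of HAMi; the 10-second interval is arbitrary):

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	healthy := true
	for {
		err := exec.Command("nvidia-smi").Run()
		switch {
		case err != nil && healthy:
			// First failure: this is the moment "Failed to initialize NVML:
			// Unknown Error" starts; note the timestamp and compare it with
			// the node-side logs around the same time.
			log.Printf("nvidia-smi started failing: %v", err)
			healthy = false
		case err == nil && !healthy:
			log.Printf("nvidia-smi recovered")
			healthy = true
		}
		time.Sleep(10 * time.Second)
	}
}
```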