volcano-sh / devices

Device plugins for Volcano, e.g. GPU

Using gpu-number causes scheduler crash #49

Closed oldthreefeng closed 10 months ago

oldthreefeng commented 1 year ago
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: ocr-job 
spec:
  minAvailable: 1
  schedulerName: volcano
  queue: default 
  policies:
    - event: PodEvicted
      action: RestartJob
  tasks:
    - replicas: 1
      name: ocr
      policies:
      - event: TaskCompleted
        action: CompleteJob
      template:
        spec:
          containers:
            - image: ai-grpc-ocr:v1.4 
              name: ocr
              resources:
                requests:
                  volcano.sh/gpu-number: 1
                  #nvidia.com/gpu: 1 
                limits:
                  volcano.sh/gpu-number: 1
                  #nvidia.com/gpu: 1 
          restartPolicy: Never
    - replicas: 1
      name: ocr-2
      policies:
      - event: TaskCompleted
        action: CompleteJob
      template:
        spec:
          containers:
            - image: ai-grpc-ocr:v1.4
              name: ocr
              resources:
                requests:
                  volcano.sh/gpu-number: 1
                  #nvidia.com/gpu: 1 
                limits:
                  volcano.sh/gpu-number: 1
                  #nvidia.com/gpu: 1 
          restartPolicy: Never

log

$ k get no 
NAME          STATUS   ROLES    AGE    VERSION
10.122.2.14   Ready    <none>   42d    v1.26.1
10.122.2.26   Ready    <none>   154m   v1.26.1
10.122.2.37   Ready    <none>   44m    v1.26.1
$ k get po                                      
NAME                                   READY   STATUS             RESTARTS      AGE
ocr-job-ocr-0                          0/1     Pending            0             4m33s
ocr-job-ocr-2-0                        0/1     Pending            0             4m33s
volcano-admission-7f76fc8cf4-rcp85     1/1     Running            0             35d
volcano-admission-init-785w6           0/1     Completed          0             35d
volcano-controllers-6875c95bd7-zs49k   1/1     Running            0             35d
volcano-scheduler-6dcf84d54d-gcwxm     0/1     CrashLoopBackOff   9 (80s ago)   58m

$ k get po       
NAME                                   READY   STATUS             RESTARTS      AGE
ocr-job-ocr-0                          0/1     Pending            0             41s
ocr-job-ocr-2-0                        0/1     Pending            0             41s
volcano-admission-7f76fc8cf4-rcp85     1/1     Running            0             35d
volcano-admission-init-785w6           0/1     Completed          0             35d
volcano-controllers-6875c95bd7-zs49k   1/1     Running            0             35d
volcano-scheduler-6dcf84d54d-zg4d2     0/1     CrashLoopBackOff   2 (25s ago)   4m20s
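
The panic below was recovered from the crashing scheduler pod; with the container in CrashLoopBackOff, the trace from the last run is still available via --previous (pod name from the listing above; this assumes the default volcano-system install namespace):

$ kubectl logs -n volcano-system volcano-scheduler-6dcf84d54d-zg4d2 --previous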
 I0816 12:30:34.061174       1 allocate.go:180] There are <3> nodes for Job <volcano-system/ocr-job-11507a57-1b68-46ad-83bf-38e0c2d76f99>
I0816 12:30:34.061251       1 predicate_helper.go:74] Predicates failed for task <volcano-system/ocr-job-ocr-0> on node <10.122.2.14>: task volcano-system/ocr-job-ocr-0 on node 10.122.2.14 fit failed: Insufficient volcano.sh/gpu-number
E0816 12:30:34.061385       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 334 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1caa960?, 0x32ac650})
    /go/src/volcano.sh/volcano/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00102cf70?})
    /go/src/volcano.sh/volcano/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
panic({0x1caa960, 0x32ac650})
    /usr/local/go/src/runtime/panic.go:884 +0x212
volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare.getDevicesIdleGPUs(...)
    /go/src/volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare/share.go:64
volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare.predicateGPUbyNumber(0xc000dadac0?, 0x0)
    /go/src/volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare/share.go:166 +0x41
volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare.checkNodeGPUNumberPredicate(0xc000c68cf0?, 0x0)
    /go/src/volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare/share.go:140 +0x3f
volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare.(*GPUDevices).FilterNode(0x1c8bec0?, 0xc000dadac0)
    /go/src/volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare/device_info.go:161 +0x157
volcano.sh/volcano/pkg/scheduler/plugins/predicates.(*predicatesPlugin).OnSessionOpen.func4(0xc000848be0, 0xc0004e0180)
    /go/src/volcano.sh/volcano/pkg/scheduler/plugins/predicates/predicates.go:522 +0x16e4
volcano.sh/volcano/pkg/scheduler/framework.(*Session).PredicateFn(0xc001094000, 0xc00100df80?, 0x0?)
    /go/src/volcano.sh/volcano/pkg/scheduler/framework/session_plugins.go:615 +0x1ce
volcano.sh/volcano/pkg/scheduler/actions/allocate.(*Action).Execute.func1(0xc000848be0, 0xc0004e0180)
    /go/src/volcano.sh/volcano/pkg/scheduler/actions/allocate/allocate.go:106 +0x1cb
volcano.sh/volcano/pkg/scheduler/util.(*predicateHelper).PredicateNodes.func1(0xc0002045a0?)
    /go/src/volcano.sh/volcano/pkg/scheduler/util/predicate_helper.go:73 +0x3a2
k8s.io/client-go/util/workqueue.ParallelizeUntil.func1()
    /go/src/volcano.sh/volcano/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:90 +0x106
created by k8s.io/client-go/util/workqueue.ParallelizeUntil
    /go/src/volcano.sh/volcano/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:76 +0x1d7
I0816 12:30:34.061465       1 statement.go:352] Discarding operations ...
I0816 12:30:34.061494       1 allocate.go:135] Try to allocate resource to Jobs in Queue <default>
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x15ab261]

goroutine 334 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00102cf70?})
    /go/src/volcano.sh/volcano/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd7
panic({0x1caa960, 0x32ac650})
    /usr/local/go/src/runtime/panic.go:884 +0x212
volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare.getDevicesIdleGPUs(...)
    /go/src/volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare/share.go:64
volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare.predicateGPUbyNumber(0xc000dadac0?, 0x0)
    /go/src/volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare/share.go:166 +0x41
volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare.checkNodeGPUNumberPredicate(0xc000c68cf0?, 0x0)
    /go/src/volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare/share.go:140 +0x3f
volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare.(*GPUDevices).FilterNode(0x1c8bec0?, 0xc000dadac0)
    /go/src/volcano.sh/volcano/pkg/scheduler/api/devices/nvidia/gpushare/device_info.go:161 +0x157
volcano.sh/volcano/pkg/scheduler/plugins/predicates.(*predicatesPlugin).OnSessionOpen.func4(0xc000848be0, 0xc0004e0180)
    /go/src/volcano.sh/volcano/pkg/scheduler/plugins/predicates/predicates.go:522 +0x16e4
volcano.sh/volcano/pkg/scheduler/framework.(*Session).PredicateFn(0xc001094000, 0xc00100df80?, 0x0?)
    /go/src/volcano.sh/volcano/pkg/scheduler/framework/session_plugins.go:615 +0x1ce
volcano.sh/volcano/pkg/scheduler/actions/allocate.(*Action).Execute.func1(0xc000848be0, 0xc0004e0180)
    /go/src/volcano.sh/volcano/pkg/scheduler/actions/allocate/allocate.go:106 +0x1cb
volcano.sh/volcano/pkg/scheduler/util.(*predicateHelper).PredicateNodes.func1(0xc0002045a0?)
    /go/src/volcano.sh/volcano/pkg/scheduler/util/predicate_helper.go:73 +0x3a2
k8s.io/client-go/util/workqueue.ParallelizeUntil.func1()
    /go/src/volcano.sh/volcano/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:90 +0x106
created by k8s.io/client-go/util/workqueue.ParallelizeUntil
    /go/src/volcano.sh/volcano/vendor/k8s.io/client-go/util/workqueue/parallelizer.go:76 +0x1d7
..

There are 3 nodes: 10.122.2.26 and 10.122.2.37 are GPU machines, while 10.122.2.14 is a CPU-only machine. Switching to the nvidia.com/gpu resource instead makes scheduling fail outright. The cause is currently unknown.

This was deployed in July, using the latest image:

kubectl apply -f https://raw.githubusercontent.com/volcano-sh/volcano/master/installer/volcano-development.yaml
wangyang0616 commented 1 year ago

Would it be convenient for you to provide the following information so that we can locate the problem? We would be very grateful.

  1. Panic stack information.
  2. The yaml file installed by the volcano device plugin.
  3. The yaml file of the workload. (if it involves business information, it can be desensitized.)
oldthreefeng commented 1 year ago
$ k get ds volcano-device-plugin -o yaml | neat   
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "6"
  name: volcano-device-plugin
  namespace: kube-system
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: volcano-device-plugin
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: null
      labels:
        name: volcano-device-plugin
    spec:
      containers:
      - args:
        - --gpu-strategy=number
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: volcanosh/volcano-device-plugin:latest
        imagePullPolicy: IfNotPresent
        name: volcano-device-plugin
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - SYS_ADMIN
            drop:
            - ALL
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/kubelet/device-plugins
          name: device-plugin
        - mountPath: /usr/local/vgpu
          name: lib
        - mountPath: /tmp
          name: hosttmp
      dnsPolicy: ClusterFirst
      nodeSelector:
        nvidia-device-enable: enable
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: volcano-device-plugin
      serviceAccountName: volcano-device-plugin
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: volcano.sh/gpu-memory
        operator: Exists
      - key: volcano.sh/gpu
        operator: Exists
      volumes:
      - hostPath:
          path: /var/lib/kubelet/device-plugins
          type: ""
        name: device-plugin
      - hostPath:
          path: /usr/local/vgpu
          type: ""
        name: lib
      - hostPath:
          path: /tmp
          type: ""
        name: hosttmp
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate

@wangyang0616
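
Note the nodeSelector in the DaemonSet above: the plugin only runs on nodes labeled nvidia-device-enable=enable, so it is worth confirming that both GPU nodes carry that label and actually end up advertising the extended resource (node name taken from this report; on a healthy GPU node the allocatable map should contain volcano.sh/gpu-number):

$ kubectl get nodes -l nvidia-device-enable=enable
$ kubectl get node 10.122.2.26 -o jsonpath='{.status.allocatable}'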

oldthreefeng commented 1 year ago

The workload YAML is the vcjob above; I have edited it into the issue description.

When I change volcano.sh/gpu-number: 1 to nvidia.com/gpu: 1 (the commented-out lines in the YAML), the error goes away, but the pods are not scheduled; they stay Pending even though the resources are sufficient.
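
The Pending pods are consistent with the resource name simply not being advertised: in --gpu-strategy=number mode the Volcano device plugin registers volcano.sh/gpu-number, while nvidia.com/gpu is only published by NVIDIA's own device plugin. What a node actually exposes can be checked directly (node name from this report):

$ kubectl describe node 10.122.2.26 | grep -A 10 Allocatable

If nvidia.com/gpu is absent there, the scheduler has nothing to allocate and the pods stay Pending regardless of the physical GPUs.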

AshinWu commented 10 months ago

This is because volcano-device-plugin runs as a DaemonSet, and on nodes that lack GPU resources its pods can end up in an abnormal state. The temporary workaround is to use taints or affinity to steer workloads away from these abnormal nodes, for example as sketched below.
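
A minimal sketch of that workaround: each vcjob task template pins its pods to the nodes where the device plugin actually runs, reusing the nvidia-device-enable label from the DaemonSet above (a taint on the CPU node with no matching toleration would achieve the same effect):

      template:
        spec:
          # schedule only onto nodes labeled for the volcano device plugin
          nodeSelector:
            nvidia-device-enable: enable
          containers:
            - image: ai-grpc-ocr:v1.4
              name: ocr
              resources:
                limits:
                  volcano.sh/gpu-number: 1
          restartPolicy: Never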