tkestack / gpu-manager


Installing gpu-manager fails: nvidia-smi not found, and the node has no `tencent.com/vcuda-core` or `tencent.com/vcuda-memory` resources. #96

Closed cailun01 closed 3 years ago

cailun01 commented 3 years ago

Hello, I have been trying to install gpu-manager recently, without success. The pod I create cannot find nvidia-smi, and the GPU node does not expose the tencent.com/vcuda-core and tencent.com/vcuda-memory resources.

My test environment:

docker 17.03.2-ce
kubernetes v1.13.5

My master node has no GPU; node8 has a GPU. Following the README, I made sure node8's docker runtime is the native runc rather than nvidia-container-runtime. node8's daemon.json is:

{
  "log-level": "debug",
  "live-restore": true,
  "icc": false,
  "storage-driver": "overlay",
  "insecure-registries": ["qce-reg.nucpoc.com"],
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "512m",
    "max-file": "3"
  }
}
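
To double-check that dockerd on node8 really uses the native runtime, a quick sanity check of my own (not from the README) is:

docker info 2>/dev/null | grep -i runtime

The output should list runc and show it as the default runtime.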

I installed gpu-admission on the master node. scheduler-policy-config.json is:

{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {
      "name": "PodFitsHostPorts"
    },
    {
      "name": "PodFitsResources"
    },
    {
      "name": "NoDiskConflict"
    },
    {
      "name": "MatchNodeSelector"
    },
    {
      "name": "HostName"
    }
  ],
  "extenders": [
    {
      "urlPrefix": "http://127.0.0.1:3456/scheduler",
      "apiVersion": "v1beta1",
      "filterVerb": "predicates",
      "enableHttps": false,
      "nodeCacheCapable": false,
      "managedResources": [
        {
          "name": "tencent.com/vcuda-memory",
          "ignoredByScheduler": false
        },
        {
          "name": "tencent.com/vcuda-core",
          "ignoredByScheduler": false
        }
      ]
    }
  ],
  "hardPodAffinitySymmetricWeight": 10,
  "alwaysCheckAllPredicates": false
}
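
Once gpu-admission is listening on 127.0.0.1:3456, the extender endpoint can be smoke-tested by hand. This is only a reachability sketch with an empty placeholder body, not a realistic ExtenderArgs payload:

curl -s -X POST -H 'Content-Type: application/json' -d '{}' \
  http://127.0.0.1:3456/scheduler/predicates

Any JSON reply means the scheduler will be able to reach the extender at urlPrefix + "/" + filterVerb.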

kube-scheduler.yaml is:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --address=0.0.0.0
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --policy-config-file=/etc/kubernetes/scheduler-policy-config.json
    - --use-legacy-policy-config=true
    - --leader-elect=true
    image: index-dev.qiniu.io/kelibrary/kube-scheduler:v1.13.5
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 0.0.0.0
        path: /healthz
        port: 10251
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/kubernetes/scheduler-policy-config.json
      name: scheduler-policy-config
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /etc/kubernetes/scheduler-policy-config.json
      type: FileOrCreate
    name: scheduler-policy-config
status: {}
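
To confirm the static pod was actually recreated with the policy file (my own sanity check, not from any README), the scheduler's startup log should mention the policy source and the extender, e.g.:

kubectl -n kube-system logs kube-scheduler-<master-hostname> | grep -iE 'policy|extender'

(<master-hostname> is a placeholder for the actual pod name suffix.)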

gpu-admission installed successfully, and I ran it on the master node:

./bin/gpu-admission --address=127.0.0.1:3456 --v=4 --kubeconfig=/root/.kube/config --logtostderr=true

Logs:

I0525 09:17:52.971037   30026 main.go:83] Server starting on 127.0.0.1:3456
I0525 09:17:52.971191   30026 reflector.go:175] Starting reflector *v1.Pod (30s) from pkg/mod/k8s.io/client-go@v0.18.12/tools/cache/reflector.go:125
I0525 09:17:52.971196   30026 reflector.go:175] Starting reflector *v1.Node (30s) from pkg/mod/k8s.io/client-go@v0.18.12/tools/cache/reflector.go:125
I0525 09:17:52.971288   30026 reflector.go:211] Listing and watching *v1.Node from pkg/mod/k8s.io/client-go@v0.18.12/tools/cache/reflector.go:125
I0525 09:17:52.971264   30026 reflector.go:211] Listing and watching *v1.Pod from pkg/mod/k8s.io/client-go@v0.18.12/tools/cache/reflector.go:125
I0525 09:25:54.025718   30026 reflector.go:496] pkg/mod/k8s.io/client-go@v0.18.12/tools/cache/reflector.go:125: Watch close - *v1.Node total 480 items received
I0525 09:27:35.164305   30026 reflector.go:496] pkg/mod/k8s.io/client-go@v0.18.12/tools/cache/reflector.go:125: Watch close - *v1.Pod total 1207 items received
I0525 09:34:13.027364   30026 reflector.go:496] pkg/mod/k8s.io/client-go@v0.18.12/tools/cache/reflector.go:125: Watch close - *v1.Node total 496 items received
I0525 09:34:46.165966   30026 reflector.go:496] pkg/mod/k8s.io/client-go@v0.18.12/tools/cache/reflector.go:125: Watch close - *v1.Pod total 889 items received

Installing gpu-manager.

Because my docker version is old (17.03) and does not support multi-stage builds, I slightly modified the Dockerfile shipped with gpu-manager to build the image in a single stage:

# Removed the first line of the original Dockerfile: ARG base_img
FROM nvidia/cuda:10.1-devel-centos7

ARG version
ARG commit

RUN yum install -y rpm-build make

# default git has problems while cloning some repository
RUN yum install -y https://repo.ius.io/ius-release-el7.rpm \
  && yum install -y git222

ENV GOLANG_VERSION 1.14.3
RUN curl -sSL https://dl.google.com/go/go${GOLANG_VERSION}.linux-amd64.tar.gz \
    | tar -C /usr/local -xz
ENV GOPROXY https://goproxy.cn,direct
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH

RUN mkdir -p /root/rpmbuild/{SPECS,SOURCES,RPMS}

COPY gpu-manager.spec /root/rpmbuild/SPECS
COPY gpu-manager-source.tar.gz /root/rpmbuild/SOURCES

RUN echo '%_topdir /root/rpmbuild' > /root/.rpmmacros \
  && echo '%__os_install_post %{nil}' >> /root/.rpmmacros \
  && echo '%debug_package %{nil}' >> /root/.rpmmacros \
  && echo '%_rpmdir /root/rpmbuild/RPMS' >> /root/.rpmmacros
WORKDIR /root/rpmbuild/SPECS
RUN rpmbuild -bb --quiet \
  --define 'version '${version}'' \
  --define 'commit '${commit}'' \
  gpu-manager.spec
# Changed here; the original line was: COPY --from=build /root/rpmbuild/RPMS/x86_64/gpu-manager-${version}-${commit}.el7.x86_64.rpm /tmp
RUN cp /root/rpmbuild/RPMS/x86_64/gpu-manager-${version}-${commit}.el7.x86_64.rpm /tmp

RUN yum install epel-release -y && \
  yum install -y which jq

# Install packages
RUN rpm -ivh /tmp/gpu-manager-${version}-${commit}.el7.x86_64.rpm \
    && rm -rf /tmp/gpu-manager-${version}-${commit}.el7.x86_64.rpm

# kubelet
VOLUME ["/var/lib/kubelet/device-plugins"]

# gpu manager storage
VOLUME ["/etc/gpu-manager/vm"]
VOLUME ["/etc/gpu-manager/vdriver"]
VOLUME ["/var/log/gpu-manager"]

# nvidia library search location
VOLUME ["/usr/local/host"]

RUN echo "/usr/local/nvidia/lib" > /etc/ld.so.conf.d/nvidia.conf && \
    echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf

ENV PATH=$PATH:/usr/local/nvidia/bin

# cgroup
VOLUME ["/sys/fs/cgroup"]

# display
EXPOSE 5678

COPY start.sh /
COPY copy-bin-lib.sh /

CMD ["/start.sh"]
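
I built the image roughly as follows, from a directory containing gpu-manager.spec and gpu-manager-source.tar.gz (the commit value below is a placeholder for the actual git SHA):

docker build --build-arg version=1.1.4 --build-arg commit=<commit-sha> \
    -t tkestack/gpu-manager:1.1.4 .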

The build produced the tkestack/gpu-manager image:

REPOSITORY                    TAG            IMAGE ID            CREATED              SIZE
tkestack/gpu-manager          1.1.4          bfd67b0b9ae9        About a minute ago   4.34 GB

Next, I tried to create a pod with the following YAML:

apiVersion: v1
kind: Pod
metadata:
  name: vcuda
  annotations:
    tencent.com/vcuda-core-limit: "50"
spec:
  restartPolicy: Never
  containers:
  - image: nvidia/cuda:10.1-devel-centos7
    imagePullPolicy: Never
    name: nvidia
    command:
    - /usr/local/nvidia/bin/nvidia-smi
    - pmon
    - -d
    - "10"
    resources:
      requests:
        tencent.com/vcuda-core: "50"
        tencent.com/vcuda-memory: "30"
      limits:
        tencent.com/vcuda-core: "50"
        tencent.com/vcuda-memory: "30"
  nodeName: node8

However, the vcuda pod that gets created cannot find nvidia-smi:

# kubectl logs pod/vcuda
container_linux.go:247: starting container process caused "exec: \"/usr/local/nvidia/bin/nvidia-smi\": stat /usr/local/nvidia/bin/nvidia-smi: no such file or directory"

Changing /usr/local/nvidia/bin/nvidia-smi to nvidia-smi or /usr/bin/nvidia-smi fails the same way.
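
Since the image above declares /etc/gpu-manager/vm and /etc/gpu-manager/vdriver as its storage volumes, one thing worth checking directly on node8 is whether anything was ever populated there (if gpu-manager never ran, these directories will be missing or empty):

ls /etc/gpu-manager/vdriver /etc/gpu-manager/vm 2>/dev/null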

In addition, when I inspect node8's status, the tencent.com/vcuda-core and tencent.com/vcuda-memory resources are missing:

Name:               node8
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=node8
                    nvidia-device-enable=enable
                    nvidia.com/type=1080Ti
Annotations:        csi.volume.kubernetes.io/nodeid: {"cephfs.csi.ceph.com":"node8","rbd.csi.ceph.com":"node8"}
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 04 Dec 2019 09:57:26 +0800
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 25 May 2021 10:08:30 +0800   Mon, 19 Apr 2021 08:31:54 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 25 May 2021 10:08:30 +0800   Tue, 25 May 2021 06:21:18 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 25 May 2021 10:08:30 +0800   Mon, 19 Apr 2021 08:31:54 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 25 May 2021 10:08:30 +0800   Mon, 19 Apr 2021 15:16:53 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.18.0.22
  Hostname:    node8
Capacity:
 cpu:                24
 ephemeral-storage:  204700Mi
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             65879704Ki
 nvidia.com/gpu:     0
 pods:               110
Allocatable:
 cpu:                24
 ephemeral-storage:  193179156161
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             65777304Ki
 nvidia.com/gpu:     0
 pods:               110
System Info:
 Machine ID:                 ef427f5b7f054701b7ac7bc12e5e49ec
 System UUID:                23bbab55-338b-11e7-9c43-bc0000de0000
 Boot ID:                    1e653f24-3770-428b-878a-3dd1cddf7a6d
 Kernel Version:             4.19.46
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.13.5
 Kube-Proxy Version:         v1.13.5
PodCIDR:                     172.16.6.0/24
Non-terminated Pods:         (30 in total)
  Namespace                  Name                                                  CPU Requests  CPU Limits   Memory Requests  Memory Limits  AGE
  ---------                  ----                                                  ------------  ----------   ---------------  -------------  ---
  default                    csi-cephfs-ceph-csi-cephfs-nodeplugin-dzf9t           0 (0%)        0 (0%)       0 (0%)           0 (0%)         119d
  default                    csi-rbd-ceph-csi-rbd-nodeplugin-gwm9d                 0 (0%)        0 (0%)       0 (0%)           0 (0%)         119d
  default                    nginx-ingress-controller-default-thbv9                0 (0%)        0 (0%)       0 (0%)           0 (0%)         45d
  default                    qce-postgres-stolon-keeper-0                          0 (0%)        0 (0%)       0 (0%)           0 (0%)         36d
  default                    spark-master-0                                        2 (8%)        2 (8%)       4Gi (6%)         4Gi (6%)       36d
  kube-system                alert-apiserver-etcd-687458495f-mwwxm                 0 (0%)        0 (0%)       0 (0%)           0 (0%)         36d
  kube-system                calico-node-6zwzf                                     250m (1%)     0 (0%)       0 (0%)           0 (0%)         82d
  kube-system                calicotlb-compute-agent-n9n7g                         0 (0%)        0 (0%)       0 (0%)           0 (0%)         196d
  kube-system                elasticsearch-2                                       2 (8%)        2 (8%)       4Gi (6%)         4Gi (6%)       36d
  kube-system                kube-proxy-xs4cb                                      0 (0%)        0 (0%)       0 (0%)           0 (0%)         538d
  kube-system                logkit-9clcj                                          100m (0%)     512m (2%)    128Mi (0%)       2Gi (3%)       538d
  kube-system                prometheus-operator-prometheus-node-exporter-7v9g9    100m (0%)     1 (4%)       256Mi (0%)       2Gi (3%)       124d
  kube-system                prometheus-prometheus-operator-prometheus-1           20m (0%)      20m (0%)     100Mi (0%)       100Mi (0%)     41h
  kube-system                volume-exporter-k94t6                                 0 (0%)        0 (0%)       0 (0%)           0 (0%)         124d
  mysql8re                   deploy-mysqlrenuc-test-uwgrv2n5-cbb884494-pznqk       200m (0%)     200m (0%)    800Mi (1%)       800Mi (1%)     29h
  mysql8re                   mysqlnucdata-operator-5bc896b5d5-8r7km                300m (1%)     300m (1%)    500Mi (0%)       500Mi (0%)     30h
  mysql8re                   statefulset-mysqlrenucf-master-t1-poybszer-0          1100m (4%)    1100m (4%)   1324Mi (2%)      1324Mi (2%)    32d
  qce                        qce-postgres-stolon-keeper-1                          0 (0%)        0 (0%)       0 (0%)           0 (0%)         36d
  qiniu-mongors              deploy-mgors-0j237pdta860g049v921-55d6d45ff-6fmdz     200m (0%)     200m (0%)    800Mi (1%)       800Mi (1%)     10d
  qiniu-mongors              deploy-mgors-qq-0euwokua-6c86d5fd59-bhntt             200m (0%)     200m (0%)    800Mi (1%)       800Mi (1%)     3d17h
  qiniu-mongors              deploy-mgors-roiling-snake-6656c87b67-mbnfk           200m (0%)     200m (0%)    800Mi (1%)       800Mi (1%)     11d
  qiniu-mongors              mongorsdata-operator-54b67c6cc5-4swfn                 300m (1%)     300m (1%)    500Mi (0%)       500Mi (0%)     3h44m
  qiniu-mysql                mysql-operator-v2-645fcc7f6c-4jdmh                    300m (1%)     300m (1%)    500Mi (0%)       500Mi (0%)     15h
  qiniu-redis                deploy-redis-0bhkp2eta860g0gm3ae0-57dfd58f47-mptrh    200m (0%)     200m (0%)    800Mi (1%)       800Mi (1%)     3h12m
  qiniu-redis                statefulset-redis-03eqoreta860g0gm3290-0              5300m (22%)   5300m (22%)  10316Mi (16%)    10316Mi (16%)  42h
  qiniu-redis                statefulset-redis-0bhkp2eta860g0gm3ae0-stl-2          2300m (9%)    2300m (9%)   9292Mi (14%)     9292Mi (14%)   6d1h
  qiniu-redis                statefulset-redis-ke-redis-cluster-0                  500m (2%)     500m (2%)    1600Mi (2%)      1600Mi (2%)    8h
  qiniu-redis                statefulset-redis-t2-jmcyp5vi-stl-1                   1300m (5%)    1300m (5%)   3148Mi (4%)      3148Mi (4%)    16d
  test                       emqx-test-0                                           1 (4%)        1 (4%)       1Gi (1%)         1Gi (1%)       36d
  test                       mysql-7888cff686-dhrp5                                1 (4%)        1 (4%)       1Gi (1%)         1Gi (1%)       36d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests       Limits
  --------           --------       ------
  cpu                18870m (78%)   19932m (83%)
  memory             41904Mi (65%)  45616Mi (71%)
  ephemeral-storage  0 (0%)         0 (0%)
  nvidia.com/gpu     0              0
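
A quicker way to inspect just the extended resources (a jsonpath sketch):

kubectl get node node8 -o jsonpath='{.status.capacity}'

tencent.com/vcuda-core and tencent.com/vcuda-memory should appear here once gpu-manager registers as a device plugin.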

Where did I go wrong in these steps? Thanks!

mYmNeo commented 3 years ago

You didn't run gpu-manager.

cailun01 commented 3 years ago

You didn't run gpu-manager.

How do I "run" GPU Manager?
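
Is it just a matter of labeling the node and applying the DaemonSet manifest from this repo (gpu-manager.yaml), i.e. roughly:

kubectl label node node8 nvidia-device-enable=enable
kubectl apply -f gpu-manager.yaml

(node8 already carries that label in the output above, so I assume the missing piece is the DaemonSet itself.)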

dwbxm commented 2 years ago

@cailun01 How can I tell whether kube-scheduler.yaml has taken effect?