oomichi / try-kubernetes


3 test failures of [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality #39

Closed: oomichi closed this issue 5 years ago

oomichi commented 6 years ago

Summary

oomichi commented 6 years ago

State at the time of the error → judging from "pod has unbound PersistentVolumeClaims", this is a PersistentVolumeClaims problem. Other PVC-related e2e tests are failing as well.

$ kubectl describe pod ss-0 -n=e2e-tests-statefulset-bd6hp
Name:               ss-0
Namespace:          e2e-tests-statefulset-bd6hp
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             baz=blah
                    controller-revision-hash=ss-7cf5fb4c86
                    foo=bar
                    statefulset.kubernetes.io/pod-name=ss-0
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      StatefulSet/ss
Containers:
  nginx:
    Image:        k8s.gcr.io/nginx-slim-amd64:0.20
    Port:         <none>
    Host Port:    <none>
    Readiness:    exec [test -f /data/statefulset-continue] delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:  <none>
    Mounts:
      /data/ from datadir (rw)
      /home from home (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-px4f7 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-ss-0
    ReadOnly:   false
  home:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/home
    HostPathType:
  default-token-px4f7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-px4f7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  53s (x25 over 2m)  default-scheduler  pod has unbound PersistentVolumeClaims (repeated 2 times)
$
$ kubectl describe pvc datadir-ss-0 -n=e2e-tests-statefulset-bd6hp
Name:          datadir-ss-0
Namespace:     e2e-tests-statefulset-bd6hp
StorageClass:
Status:        Pending
Volume:
Labels:        baz=blah
               foo=bar
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type    Reason         Age               From                         Message
  ----    ------         ----              ----                         -------
  Normal  FailedBinding  0s (x16 over 3m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
oomichi commented 6 years ago

The PVC status above is "no persistent volumes available for this claim and no storage class is set". It says no storage class is configured, and indeed there is none. According to "Kubernetes: Up & Running", MS Azure provides a default storage class, so the assumption appears to be that the operator configures one in advance.

$ kubectl get storageclass -n=e2e-tests-statefulset-bd6hp
No resources found.
$
$ kubectl get storageclass --all-namespaces
No resources found.
$
oomichi commented 6 years ago

So a PV is needed.

Create an NFS server VM

$ nova boot --key-name mykey --flavor m1.medium --image 73f70800-1d0c-4569-a3c5-29c70775c334 nfs

Configure the NFS server

$ sudo apt-get update
$ sudo apt-get install -y nfs-kernel-server
$ sudo mkdir -p /opt/nfs
$ sudo su -c "echo '/opt/nfs 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports"
$ sudo exportfs -ra
$ sudo systemctl enable nfs-kernel-server.service
$ sudo systemctl start nfs-kernel-server.service
$ sync
$ sudo reboot

Verify: confirm that k8s-master can connect to the NFS server

$ sudo apt-get install -y nfs-common
$ sudo mkdir /mnt/nfs
$ sudo mount -t nfs 192.168.1.105:/opt/nfs /mnt/nfs
$
$ sudo mount | grep nfs
192.168.1.105:/opt/nfs on /mnt/nfs type nfs4 (rw,relatime,vers=4.0,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.104,local_lock=none,addr=192.168.1.105)
$

Create the target PV

$ cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs001
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 192.168.1.105
    path: /opt/nfs
$ kubectl create -f nfs-pv.yaml
$
$ kubectl describe pv pv-nfs001
Name:            pv-nfs001
Labels:          <none>
Annotations:     volume.beta.kubernetes.io/storage-class=slow
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    slow
Status:          Available
Claim:
Reclaim Policy:  Recycle
Access Modes:    RWO
Capacity:        10Gi
Node Affinity:   <none>
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.1.105
    Path:      /opt/nfs
    ReadOnly:  false
$

Even with just this in place, the error persists

$ kubectl describe pvc datadir-ss-0 -n=e2e-tests-statefulset-8gqtr
..
Events:
  Type    Reason         Age               From                         Message
  ----    ------         ----              ----                         -------
  Normal  FailedBinding  4s (x6 over 58s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
oomichi commented 6 years ago

It seems the StorageClass "slow" has to be created, and for NFS it looks like https://github.com/kubernetes-incubator/external-storage has to be used.

oomichi commented 6 years ago

From Issue #19:

The StorageClass of the PVC above is empty. In addition, the test's comment notes that it cannot be made Conformance because it depends on a default StorageClass.

To enable dynamic storage provisioning based on storage class, the cluster administrator needs to
enable the DefaultStorageClass admission controller on the API server.
In other words, the DefaultStorageClass setup needs to be done by the admin.

A default StorageClass means: normally a StorageClass must be specified when making a PVC request, but by configuring a default StorageClass, the PVC no longer needs to specify one explicitly.
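
As a cross-check on the admission-controller side, the flag can be inspected in the kube-apiserver static pod manifest (a sketch assuming a kubeadm-style layout; in recent releases DefaultStorageClass is in the default plugin set and only needs to be listed explicitly if the plugin list was overridden):

$ sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml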

oomichi commented 6 years ago

Configure a default StorageClass

Create a ServiceAccount for the NFS provisioner

$ cat nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Create a Deployment for the NFS provisioner

$ cat nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-nfs
            - name: NFS_SERVER
              value: 192.168.1.105
            - name: NFS_PATH
              value: /opt/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.105
            path: /opt/nfs
$
$ kubectl create -f nfs-deployment.yaml
deployment.extensions/nfs-client-provisioner created
$

Create the default StorageClass → after creation, confirm that it is marked with (default)

$ cat storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-nfs
$
$ kubectl create -f storage-class.yaml
storageclass.storage.k8s.io/nfs created
$
$ kubectl get storageclasses
NAME            PROVISIONER   AGE
nfs (default)   k8s-nfs       1m
$

The PVC events are no longer errors

$ kubectl describe pvc datadir-ss-0 -n=e2e-tests-statefulset-2k5rc
...
Events:
  Type    Reason                Age              From                         Message
  ----    ------                ----             ----                         -------
  Normal  ExternalProvisioning  0s (x8 over 1m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "k8s-nfs" or manually created by system administrator

However, the Pod still shows the same error

$ kubectl describe pod ss-0 -n=e2e-tests-statefulset-2k5rc
...
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  1m (x25 over 3m)  default-scheduler  pod has unbound PersistentVolumeClaims (repeated 2 times)
oomichi commented 6 years ago

Investigate the PVC message "waiting for a volume to be created, either by external provisioner "k8s-nfs" or manually created by system administrator"

From this message, the PVC appears to be waiting for a volume to be created. Is the cluster not yet in a state where the target volume gets created automatically? Is some configuration still missing?

I created a simple PVC, but the same log appears

$ cat nfs-pvc-test.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
$
$ kubectl create -f nfs-pvc-test.yaml
persistentvolumeclaim/test-claim created
$
$ kubectl describe pvc test-claim
Name:          test-claim
Namespace:     default
StorageClass:  nfs
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner=k8s-nfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type    Reason                Age   From                         Message
  ----    ------                ----  ----                         -------
  Normal  ExternalProvisioning  0s    persistentvolume-controller  waiting for a volume to be created, either by external provisioner "k8s-nfs" or manually created by system administrator

It turns out the NFS provisioner itself was emitting error logs

$ kubectl describe pod nfs-client-provisioner-5894cc9b97-wqgcz
...
Events:
  Type     Reason       Age   From                 Message
  ----     ------       ----  ----                 -------
  Normal   Scheduled    36m   default-scheduler    Successfully assigned default/nfs-client-provisioner-5894cc9b97-wqgcz to k8s-node01
  Warning  FailedMount  36m   kubelet, k8s-node01  MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/6b4a9165-9b64-11e8-b869-fa163e420595/volumes/kubernetes.io~nfs/nfs-client-root --scope -- mount -t nfs 192.168.1.105:/opt/nfs /var/lib/kubelet/pods/6b4a9165-9b64-11e8-b869-fa163e420595/volumes/kubernetes.io~nfs/nfs-client-root
Output: Running scope as unit run-rfeff1351d3da469b8529d7084249a063.scope.
mount: wrong fs type, bad option, bad superblock on 192.168.1.105:/opt/nfs,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
  Warning  FailedMount  36m  kubelet, k8s-node01  MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/6b4a9165-9b64-11e8-b869-fa163e420595/volumes/kubernetes.io~nfs/nfs-client-root --scope -- mount -t nfs 192.168.1.105:/opt/nfs /var/lib/kubelet/pods/6b4a9165-9b64-11e8-b869-fa163e420595/volumes/kubernetes.io~nfs/nfs-client-root
Output: Running scope as unit run-r7ce26b5c744147abad6e0386235885e9.scope.
mount: wrong fs type, bad option, bad superblock on 192.168.1.105:/opt/nfs,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
oomichi commented 6 years ago

It is probably because the nfs-client-provisioner Pod cannot reach 192.168.1.105. -> Tried accessing it from another Pod and that failed too. -> Allowed everything on the NFS server side, but the error still occurs.

# mount -t nfs 192.168.1.105:/opt/nfs /mnt/nfs
mount.nfs: access denied by server while mounting 192.168.1.105:/opt/nfs
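
For reference, "allowing everything" on the NFS server side amounts to an export entry along these lines (a sketch; the insecure option is an assumption here, sometimes needed because mounts can originate from non-privileged source ports):

$ sudo su -c "echo '/opt/nfs *(rw,sync,no_subtree_check,no_root_squash,insecure)' >> /etc/exports"
$ sudo exportfs -ra
$ sudo exportfs -v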
oomichi commented 6 years ago

There also seem to be problems with the nfs-client-provisioner image itself, so try GlusterFS, one of the internal provisioners, instead. → GlusterFS needs Heketi to provide its REST API, but Heketi is an OSS project with only two contributors, so its future is unclear. It is packaged for Fedora, but there is no package for Ubuntu.

$ sudo apt-get update
$ sudo apt-get -y install glusterfs-server
$ sudo systemctl enable glusterfs-server
$ sync
$ sudo reboot
..
$ dd if=/dev/zero of=loop1.img bs=1M count=20000
$ sudo losetup /dev/loop0 loop1.img
$ sudo mkfs.ext4 /dev/loop0
$ sudo mkdir /glusterfs
$ sudo mount -t ext4 /dev/loop0 /glusterfs
$ sudo mkdir /glusterfs/distributed
$ sudo gluster volume create vol_distributed transport tcp glusterfs:/glusterfs/distributed
$ sudo gluster volume start vol_distributed
$ sudo gluster volume status
Status of volume: vol_distributed
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterfs:/glusterfs/distributed      49152     0          Y       3306
NFS Server on localhost                     N/A       N/A        N       N/A

Task Status of Volume vol_distributed
------------------------------------------------------------------------------
There are no active volume tasks

Confirm that GlusterFS can be mounted locally

$ sudo mount -t glusterfs localhost:vol_distributed /mnt
$ sudo mount | grep gluster
/dev/loop0 on /glusterfs type ext4 (rw,relatime,data=ordered)
localhost:vol_distributed on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
$

Install glusterfs-client on each node

$ sudo apt-get install glusterfs-client

Make the glusterfs server name resolvable on each node

$ sudo vi /etc/hosts
+ 192.168.1.111 glusterfs

Confirm that GlusterFS can be mounted on each node

$ sudo mkdir /mnt/glusterfs
$ sudo mount -t glusterfs glusterfs:vol_distributed /mnt/glusterfs
$ sudo mount | grep glusterfs
glusterfs:vol_distributed on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
$
$ sudo umount /mnt/glusterfs

Install Heketi

$ wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-v7.0.0.linux.amd64.tar.gz
$ tar xzvf heketi-v7.0.0.linux.amd64.tar.gz
$ cd heketi
$ sudo cp heketi /usr/local/bin/
$ sudo cp heketi.json /etc/
$

Configure Heketi

$ sudo mkdir /var/lib/heketi
$ sudo vi /etc/heketi.json
--- /home/ubuntu/heketi/heketi.json     2018-06-05 10:52:42.000000000 +0000
+++ /etc/heketi.json    2018-08-09 20:11:29.333758288 +0000
@@ -19,11 +19,11 @@
   "jwt": {
     "_admin": "Admin has access to all APIs",
     "admin": {
-      "key": "My Secret"
+      "key": "password"
     },
     "_user": "User only has access to /volumes endpoint",
     "user": {
-      "key": "My Secret"
+      "key": "password"
     }
   },
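
For completeness, starting Heketi against the edited config would look roughly like this (a sketch; this GlusterFS/Heketi path was not pursued further, and the next comment switches to Cinder):

$ sudo heketi --config=/etc/heketi.json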
oomichi commented 6 years ago

Try OpenStack Cinder: https://stackoverflow.com/questions/46067591/how-to-use-openstack-cinder-to-create-storage-class-and-dynamically-provision-pe

Summary

Create cloud.conf

$ sudo cat /etc/kubernetes/cloud.conf
[Global]
auth-url=http://iaas-ctrl:5000/v3
username=admin
password=ADMIN_PASS
region=RegionOne
tenant-name=admin
domain-name=Default

Modify the kube-controller-manager configuration

--- /home/ubuntu/etc/kube-controller-manager.yaml.orig  2018-08-09 22:24:52.484658198 +0000
+++ /etc/kubernetes/manifests/kube-controller-manager.yaml      2018-08-09 22:33:39.376593592 +0000
@@ -25,6 +25,8 @@
     - --root-ca-file=/etc/kubernetes/pki/ca.crt
     - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
     - --use-service-account-credentials=true
+    - --cloud-provider=openstack
+    - --cloud-config=/etc/kubernetes/cloud.conf
     image: k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
     imagePullPolicy: IfNotPresent
     livenessProbe:
@@ -61,6 +63,9 @@
     - mountPath: /etc/ssl/certs
       name: ca-certs
       readOnly: true
+    - mountPath: /etc/kubernetes/cloud.conf
+      name: k8s-cloud-conf
+      readOnly: true
   hostNetwork: true
   priorityClassName: system-cluster-critical
   volumes:
@@ -92,4 +97,8 @@
       path: /etc/kubernetes/pki
       type: DirectoryOrCreate
     name: k8s-certs
+  - hostPath:
+      path: /etc/kubernetes/cloud.conf
+      type: File
+    name: k8s-cloud-conf
 status: {}

Modify the kube-apiserver configuration

--- kube-apiserver.yaml.orig    2018-08-09 23:43:55.728712756 +0000
+++ /etc/kubernetes/manifests/kube-apiserver.yaml       2018-08-09 23:45:26.386106843 +0000
@@ -40,6 +40,8 @@
     - --service-cluster-ip-range=10.96.0.0/12
     - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
     - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
+    - --cloud-provider=openstack
+    - --cloud-config=/etc/kubernetes/cloud.conf
     image: k8s.gcr.io/kube-apiserver-amd64:v1.11.1
     imagePullPolicy: IfNotPresent
     livenessProbe:
@@ -71,6 +73,9 @@
     - mountPath: /usr/local/share/ca-certificates
       name: usr-local-share-ca-certificates
       readOnly: true
+    - mountPath: /etc/kubernetes/cloud.conf
+      name: k8s-cloud-conf
+      readOnly: true
   hostNetwork: true
   priorityClassName: system-cluster-critical
   volumes:
@@ -94,4 +99,8 @@
       path: /etc/ca-certificates
       type: DirectoryOrCreate
     name: etc-ca-certificates
+  - hostPath:
+      path: /etc/kubernetes/cloud.conf
+      type: File
+    name: k8s-cloud-conf
 status: {}

Modify the kubelet configuration (on all nodes)

--- /var/lib/kubelet/config.yaml.orig   2018-08-02 16:57:23.865340698 +0000
+++ /var/lib/kubelet/config.yaml        2018-08-10 00:12:12.217178130 +0000
@@ -15,6 +15,8 @@
     cacheUnauthorizedTTL: 30s
 cgroupDriver: cgroupfs
 cgroupsPerQOS: true
+cloudProvider: openstack
+cloudConfig: /etc/kubernetes/cloud.conf
 clusterDNS:
 - 10.96.0.10
 clusterDomain: cluster.local
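
The static pod manifests above are re-read by the kubelet automatically, but the change to the kubelet's own config requires restarting the kubelet on each node (a sketch assuming a systemd-managed kubelet):

$ sudo systemctl restart kubelet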

Create a StorageClass

$ cat storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova
$
$ kubectl create -f storage-class.yaml

Create a PVC for testing

$ cat demo-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cinder-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
$
$ kubectl create -f demo-pvc.yaml

However, the PVC stays Pending

$ kubectl get pvc
NAME           STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cinder-claim   Pending                                       gold           3m

Looking at the details, a Warning was being emitted

$ kubectl describe pvc cinder-claim
Name:          cinder-claim
Namespace:     default
StorageClass:  gold
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/cinder
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type     Reason              Age                From                         Message
  ----     ------              ----               ----                         -------
  Warning  ProvisioningFailed  10s (x15 over 3m)  persistentvolume-controller  Failed to provision volume with StorageClass "gold": failed to create a 1 GB volume: Resource not found
oomichi commented 6 years ago

Investigate the cause of the Cinder failure

Summary

Volume creation works when operating OpenStack directly

$ cinder create 1
$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 4dcce745-ca7d-4c38-afc7-3d85f9686a76 | available | -    | 1    | -           | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
$

From the log /var/log/containers/kube-controller-manager-k8s-master_kube-system_kube-controller-manager-c26c754d8e2654062b20d2cf4883ef469b67d716112d97452c56b3bb94845344.log it is clear that kube-controller-manager, not kubelet or kube-apiserver, is the component attempting the volume creation in Cinder.

{"log":"I0809 23:29:31.405382       1 event.go:221] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"default\", Name:\"cinder-claim\", UID:\"0cb11d2d-9c2c-11e8-b869-fa163e420595\", APIVersion:\"v1\", ResourceVersion:\"2524134\", FieldPath:\"\"}): type: 'Warning' reason: 'ProvisioningFailed' Failed to provision volume with StorageClass \"gold\": failed to create a 1 GB volume: Resource not found\n","stream":"stderr","time":"2018-08-09T23:29:31.405961375Z"}

Identify the error location: pkg/controller/volume/persistentvolume/pv_controller.go

1469         volume, err = provisioner.Provision(selectedNode, allowedTopologies)
1470         opComplete(&err)
1471         if err != nil {
1472                 // Other places of failure have nothing to do with DynamicProvisioningScheduling,
1473                 // so just let controller retry in the next sync. We'll only call func
1474                 // rescheduleProvisioning here when the underlying provisioning actually failed.
1475                 ctrl.rescheduleProvisioning(claim)
1476
1477                 strerr := fmt.Sprintf("Failed to provision volume with StorageClass %q: %v", storageClass.Name, err)
1478                 glog.V(2).Infof("failed to provision volume for claim %q with StorageClass %q: %v", claimToClaimKey(claim), storageClass.Name, err)
1479                 ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.ProvisioningFailed, strerr)
1480                 return pluginName, err
1481         }

pkg/cloudprovider/providers/openstack/openstack_volumes.go: the err from volumes.createVolume(opts) returns "Resource not found". createVolume() has an implementation per Cinder API version, but they differ only in the accepted status codes.

465         volumeID, volumeAZ, err := volumes.createVolume(opts)
466
467         if err != nil {
468                 return "", "", "", os.bsOpts.IgnoreVolumeAZ, fmt.Errorf("failed to create a %d GB volume: %v", size, err)
469         }

pkg/cloudprovider/providers/openstack/openstack_volumes.go: "Resource not found" is returned from Create() below

154 func (volumes *VolumesV3) createVolume(opts volumeCreateOpts) (string, string, error) {
155         startTime := time.Now()
156
157         createOpts := volumes_v3.CreateOpts{
158                 Name:             opts.Name,
159                 Size:             opts.Size,
160                 VolumeType:       opts.VolumeType,
161                 AvailabilityZone: opts.Availability,
162                 Metadata:         opts.Metadata,
163         }
164
165         vol, err := volumes_v3.Create(volumes.blockstorage, createOpts).Extract()
166         timeTaken := time.Since(startTime).Seconds()
167         recordOpenstackOperationMetric("create_v3_volume", timeTaken, err)
168         if err != nil {
169                 return "", "", err
170         }
171         return vol.ID, vol.AvailabilityZone, nil
172 }
...
 41         volumes_v3 "github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes"

vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes/requests.go: it cannot be determined whether ToVolumeCreateMap() or Post() is returning "Resource not found". However, the string suggests an HTTP 404, so the investigation proceeds on the assumption that it is Post().

 49 // Create will create a new Volume based on the values in CreateOpts. To extract
 50 // the Volume object from the response, call the Extract method on the
 51 // CreateResult.
 52 func Create(client *gophercloud.ServiceClient, opts CreateOptsBuilder) (r CreateResult) {
 53         b, err := opts.ToVolumeCreateMap()
 54         if err != nil {
 55                 r.Err = err
 56                 return
 57         }
 58         _, r.Err = client.Post(createURL(client), b, &r.Body, &gophercloud.RequestOpts{
 59                 OkCodes: []int{202},
 60         })
 61         return
 62 }

What does createURL(client) do? vendor/github.com/gophercloud/gophercloud/openstack/blockstorage/v3/volumes/urls.go

  5 func createURL(c *gophercloud.ServiceClient) string {
  6         return c.ServiceURL("volumes")
  7 }

vendor/github.com/gophercloud/gophercloud/service_client.go

 41 // ServiceURL constructs a URL for a resource belonging to this provider.
 42 func (client *ServiceClient) ServiceURL(parts ...string) string {
 43         return client.ResourceBaseURL() + strings.Join(parts, "/")
 44 }
...
 33 // ResourceBaseURL returns the base URL of any resources used by this service. It MUST end with a /.
 34 func (client *ServiceClient) ResourceBaseURL() string {
 35         if client.ResourceBase != "" {
 36                 return client.ResourceBase
 37         }
 38         return client.Endpoint
 39 }

Where is the ServiceClient configured? The service type strings (volume, volumev2, volumev3) are the same as the OpenStack endpoint types.

335 // NewBlockStorageV1 creates a ServiceClient that may be used to access the v1
336 // block storage service.
337 func NewBlockStorageV1(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) {
338         return initClientOpts(client, eo, "volume")
339 }
340
341 // NewBlockStorageV2 creates a ServiceClient that may be used to access the v2
342 // block storage service.
343 func NewBlockStorageV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) {
344         return initClientOpts(client, eo, "volumev2")
345 }
346
347 // NewBlockStorageV3 creates a ServiceClient that may be used to access the v3 block storage service.
348 func NewBlockStorageV3(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) {
349         return initClientOpts(client, eo, "volumev3")
350 }
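
Since gophercloud builds the base URL from the Keystone service catalog, it is worth cross-checking which block-storage endpoints are actually registered (a sketch using the standard OpenStack CLI; the exact service type names depend on the deployment):

$ openstack catalog list
$ openstack endpoint list --service volumev3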

The REST API URL is probably wrong, but there is no logging of the URL at all, which makes this tough... And the internal provider is apparently deprecated anyway:

{"log":"W0810 00:15:38.952519       1 plugins.go:112] WARNING: openstack built-in cloud provider is now deprecated. Please use 'external' cloud provider for openstack: https://github.com/ku
bernetes/cloud-provider-openstack\n","stream":"stderr","time":"2018-08-10T00:15:38.952620599Z"}
oomichi commented 6 years ago

Use the external cloud provider for OpenStack to back the StorageClass.

Set up RBAC

$ cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: standalone-cinder-provisioner

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: standalone-cinder-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-standalone-cinder-provisioner
subjects:
  - kind: ServiceAccount
    name: standalone-cinder-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: standalone-cinder-provisioner
  apiGroup: rbac.authorization.k8s.io

Create the Deployment

$ cat deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: standalone-cinder-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: standalone-cinder-provisioner
    spec:
      serviceAccountName: standalone-cinder-provisioner
      containers:
      - name: standalone-cinder-provisioner
        image: docker.io/k8scloudprovider/cinder-provisioner:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: OS_AUTH_URL
          value: http://192.168.1.1:5000/v3
        - name: OS_USERNAME
          value: admin
        - name: OS_PASSWORD
          value: ADMIN_PASS
        - name: OS_TENANT_ID
          value: 682e74f275fe427abd9eb6759f3b68c5
        - name: OS_REGION_NAME
          value: RegionOne
        - name: OS_DOMAIN_NAME
          value: Default
$
$ kubectl create -f deployment.yaml
deployment.extensions/standalone-cinder-provisioner created
$

Create cloud.conf

$ sudo cat /etc/kubernetes/cloud.conf
[Global]
auth-url=http://192.168.1.1:5000/v3
username=admin
password=ADMIN_PASS
region=RegionOne
tenant-name=admin
domain-name=Default

[BlockStorage]
bs-version=v3

Modify the kube-controller-manager configuration

$ sudo diff -u ../etc/kube-controller-manager.yaml.orig /etc/kubernetes/manifests/kube-controller-manager.yaml
--- ../etc/kube-controller-manager.yaml.orig    2018-08-09 22:24:52.484658198 +0000
+++ /etc/kubernetes/manifests/kube-controller-manager.yaml      2018-08-10 02:23:06.199040433 +0000
@@ -25,6 +25,8 @@
     - --root-ca-file=/etc/kubernetes/pki/ca.crt
     - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
     - --use-service-account-credentials=true
+    - --cloud-provider=openstack
+    - --cloud-config=/etc/kubernetes/cloud.conf
     image: k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
     imagePullPolicy: IfNotPresent
     livenessProbe:
@@ -61,6 +63,9 @@
     - mountPath: /etc/ssl/certs
       name: ca-certs
       readOnly: true
+    - mountPath: /etc/kubernetes/cloud.conf
+      name: k8s-cloud-conf
+      readOnly: true
   hostNetwork: true
   priorityClassName: system-cluster-critical
   volumes:
@@ -92,4 +97,8 @@
       path: /etc/kubernetes/pki
       type: DirectoryOrCreate
     name: k8s-certs
+  - hostPath:
+      path: /etc/kubernetes/cloud.conf
+      type: File
+    name: k8s-cloud-conf
 status: {}

The PVC still reports:

waiting for a volume to be created, either by external provisioner "k8s-cinder" or manually created by system administrator

Specified PROVISIONER_NAME in the Deployment and set the same value as the StorageClass provisioner, but that did not fix it. Nothing is output in the Deployment's logs.

oomichi commented 6 years ago

https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/

oomichi commented 6 years ago

https://github.com/kubernetes-incubator/external-storage/tree/master/docs/demo/hostpath-provisioner

https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/volume/cinder/provisioner/provisioner.go#L37

//ProvisionerName is the unique name of this provisioner
ProvisionerName = "openstack.org/standalone-cinder"

Specify this name as the provisioner in the StorageClass

$ cat storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: openstack.org/standalone-cinder
parameters:
  type: fast
  availability: nova

Do not specify PROVISIONER_NAME in the Deployment

$ cat deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: standalone-cinder-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: standalone-cinder-provisioner
    spec:
      serviceAccountName: standalone-cinder-provisioner
      containers:
      - name: standalone-cinder-provisioner
        image: docker.io/k8scloudprovider/cinder-provisioner:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: OS_AUTH_URL
          value: http://192.168.1.1:5000/v3
        - name: OS_USERNAME
          value: admin
        - name: OS_PASSWORD
          value: ADMIN_PASS
        - name: OS_TENANT_ID
          value: 682e74f275fe427abd9eb6759f3b68c5
        - name: OS_REGION_NAME
          value: RegionOne
        - name: OS_DOMAIN_NAME
          value: Default

The PVC is still Pending as before, but the Pod now emits errors. → The cause is that iaas-ctrl, which appears in the endpoint returned by Keystone, cannot be resolved.

$ kubectl logs standalone-cinder-provisioner-5f9c868867-lrvcj
I0810 21:24:24.287983       1 controller.go:492] Starting provisioner controller c0e88fcf-9ce3-11e8-9533-0a580af401cb!
I0810 21:25:39.493894       1 controller.go:1167] scheduleOperation[lock-provision-default/cinder-claim[eadefaf7-9ce3-11e8-a146-fa163e420595]]
I0810 21:25:39.594036       1 leaderelection.go:156] attempting to acquire leader lease...
I0810 21:25:39.624575       1 leaderelection.go:178] successfully acquired lease to provision for pvc default/cinder-claim
I0810 21:25:39.624861       1 controller.go:1167] scheduleOperation[provision-default/cinder-claim[eadefaf7-9ce3-11e8-a146-fa163e420595]]
E0810 21:25:39.697512       1 actions.go:71] Failed to create a 1 GiB volume: Post http://iaas-ctrl:8776/v2/682e74f275fe427abd9eb6759f3b68c5/volumes: dial tcp: lookup iaas-ctrl on 10.96.0.10:53: no such host
E0810 21:25:39.697549       1 provisioner.go:184] Failed to create volume
E0810 21:25:39.697567       1 controller.go:895] Failed to provision volume for claim "default/cinder-claim" with StorageClass "gold": Post http://iaas-ctrl:8776/v2/682e74f275fe427abd9eb6759f3b68c5/volumes: dial tcp: lookup iaas-ctrl on 10.96.0.10:53: no such host
E0810 21:25:39.697637       1 goroutinemap.go:150] Operation for "provision-default/cinder-claim[eadefaf7-9ce3-11e8-a146-fa163e420595]" failed. No retries permitted until 2018-08-10 21:25:40.197609235 +0000 UTC m=+76.308686603 (durationBeforeRetry 500ms). Error: "Post http://iaas-ctrl:8776/v2/682e74f275fe427abd9eb6759f3b68c5/volumes: dial tcp: lookup iaas-ctrl on 10.96.0.10:53: no such host"
I0810 21:25:41.644271       1 leaderelection.go:198] stopped trying to renew lease to provision for pvc default/cinder-claim, task failed

Check /etc/hosts of the existing Pod

$ kubectl exec standalone-cinder-provisioner-5f9c868867-lrvcj -- cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.203    standalone-cinder-provisioner-5f9c868867-lrvcj

Entries can apparently be added via hostAliases in the Deployment.

$ cat deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: standalone-cinder-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: standalone-cinder-provisioner
    spec:
      serviceAccountName: standalone-cinder-provisioner
      hostAliases:
      - ip: "192.168.1.1"
        hostnames:
        - "iaas-ctrl"
      containers:
      - name: standalone-cinder-provisioner
        image: docker.io/k8scloudprovider/cinder-provisioner:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: OS_AUTH_URL
          value: http://iaas-ctrl:5000/v3
        - name: OS_USERNAME
          value: admin
        - name: OS_PASSWORD
          value: ADMIN_PASS
        - name: OS_TENANT_ID
          value: 682e74f275fe427abd9eb6759f3b68c5
        - name: OS_REGION_NAME
          value: RegionOne
        - name: OS_DOMAIN_NAME
          value: Default

That worked:

$ kubectl exec standalone-cinder-provisioner-7bfb84d4d6-8bz4j -- cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.204    standalone-cinder-provisioner-7bfb84d4d6-8bz4j

# Entries added by HostAliases.
192.168.1.1     iaas-ctrl

Now it is "Resource not found"... the same symptom as with the internal provider

I0810 22:06:01.796035       1 controller.go:492] Starting provisioner controller 918a1a6c-9ce9-11e8-8be4-0a580af401ce!
I0810 22:06:01.836948       1 controller.go:1167] scheduleOperation[lock-provision-default/cinder-claim[8a8b5234-9ce9-11e8-a146-fa163e420595]]
I0810 22:06:01.884646       1 leaderelection.go:156] attempting to acquire leader lease...
I0810 22:06:01.904134       1 leaderelection.go:178] successfully acquired lease to provision for pvc default/cinder-claim
I0810 22:06:01.904339       1 controller.go:1167] scheduleOperation[provision-default/cinder-claim[8a8b5234-9ce9-11e8-a146-fa163e420595]]
E0810 22:06:02.098094       1 actions.go:71] Failed to create a 1 GiB volume: Resource not found
E0810 22:06:02.098348       1 provisioner.go:184] Failed to create volume
E0810 22:06:02.100136       1 controller.go:895] Failed to provision volume for claim "default/cinder-claim" with StorageClass "gold": Resource not found
E0810 22:06:02.101034       1 goroutinemap.go:150] Operation for "provision-default/cinder-claim[8a8b5234-9ce9-11e8-a146-fa163e420595]" failed. No retries permitted until 2018-08-10 22:06:02.600670206 +0000 UTC m=+1.271658978 (durationBeforeRetry 500ms). Error: "Resource not found"
I0810 22:06:03.961259       1 leaderelection.go:198] stopped trying to renew lease to provision for pvc default/cinder-claim, task failed

Cinder API log: /var/log/apache2/cinder.log. The URL is the same as when cinderclient succeeded, so a parameter in the request body is probably invalid.

192.168.1.108 - - [10/Aug/2018:14:47:32 -0700] "POST /v2/682e74f275fe427abd9eb6759f3b68c5/volumes HTTP/1.1" 202 791 "-" "python-cinderclient" 713557(us)
...
192.168.1.108 - - [10/Aug/2018:14:49:43 -0700] "GET /v2/682e74f275fe427abd9eb6759f3b68c5/volumes/detail HTTP/1.1" 200 1013 "-" "python-cinderclient" 238032(us)
192.168.1.109 - - [10/Aug/2018:15:01:00 -0700] "POST /v2/682e74f275fe427abd9eb6759f3b68c5/volumes HTTP/1.1" 404 92 "-" "gophercloud/2.0.0" 142015(us)
192.168.1.109 - - [10/Aug/2018:15:01:07 -0700] "POST /v2/682e74f275fe427abd9eb6759f3b68c5/volumes HTTP/1.1" 404 92 "-" "gophercloud/2.0.0" 140874(us)
192.168.1.109 - - [10/Aug/2018:15:01:23 -0700] "POST /v2/682e74f275fe427abd9eb6759f3b68c5/volumes HTTP/1.1" 404 92 "-" "gophercloud/2.0.0" 139617(us)

Since I did not want to modify OpenStack or k8s, from tcpdump:

{"itemNotFound": {"message": "Volume type with name fast could not be found.", "code": 404}}
oomichi commented 6 years ago

Fix the volume-type error: removed type from the StorageClass definition, but errors still occur

I0810 23:12:36.116700       1 controller.go:492] Starting provisioner controller de57af85-9cf2-11e8-98f3-0a580af401d0!
I0810 23:12:40.221213       1 controller.go:1167] scheduleOperation[lock-provision-default/cinder-claim[e0c79084-9cf2-11e8-a146-fa163e420595]]
I0810 23:12:40.252366       1 leaderelection.go:156] attempting to acquire leader lease...
I0810 23:12:40.265841       1 leaderelection.go:178] successfully acquired lease to provision for pvc default/cinder-claim
I0810 23:12:40.266030       1 controller.go:1167] scheduleOperation[provision-default/cinder-claim[e0c79084-9cf2-11e8-a146-fa163e420595]]
E0810 23:12:46.488867       1 clusterbroker.go:43] Failed to create chap secret in namespace default: secrets is forbidden: User "system:serviceaccount:default:standalone-cinder-provisioner" cannot create secrets in the namespace "default"
E0810 23:12:46.488983       1 provisioner.go:220] Failed to prepare volume auth: secrets is forbidden: User "system:serviceaccount:default:standalone-cinder-provisioner" cannot create secrets in the namespace "default"
E0810 23:12:48.100114       1 controller.go:895] Failed to provision volume for claim "default/cinder-claim" with StorageClass "gold": secrets is forbidden: User "system:serviceaccount:default:standalone-cinder-provisioner" cannot create secrets in the namespace "default"
E0810 23:12:48.100504       1 goroutinemap.go:150] Operation for "provision-default/cinder-claim[e0c79084-9cf2-11e8-a146-fa163e420595]" failed. No retries permitted until 2018-08-10 23:12:48.600269686 +0000 UTC m=+12.948564491 (durationBeforeRetry 500ms). Error: "secrets is forbidden: User \"system:serviceaccount:default:standalone-cinder-provisioner\" cannot create secrets in the namespace \"default\""
I0810 23:12:48.477162       1 leaderelection.go:198] stopped trying to renew lease to provision for pvc default/cinder-claim, task failed
I0810 23:12:51.159021       1 controller.go:1167] scheduleOperation[provision-default/cinder-claim[e0c79084-9cf2-11e8-a146-fa163e420595]]

The Cinder-side error is resolved: the POST requests from gophercloud now return 202.

192.168.1.109 - - [10/Aug/2018:16:14:06 -0700] "POST /v2/682e74f275fe427abd9eb6759f3b68c5/volumes HTTP/1.1" 202 838 "-" "gophercloud/2.0.0" 723122(us)
192.168.1.109 - - [10/Aug/2018:16:14:10 -0700] "GET /v2/682e74f275fe427abd9eb6759f3b68c5/volumes/92f2ee47-bd94-4c2e-a2b1-884e8673c4c4 HTTP/1.1" 200 1057 "-" "gophercloud/2.0.0" 169753(us)
192.168.1.109 - - [10/Aug/2018:16:14:10 -0700] "POST /v2/682e74f275fe427abd9eb6759f3b68c5/volumes/92f2ee47-bd94-4c2e-a2b1-884e8673c4c4/action HTTP/1.1" 202 - "-" "gophercloud/2.0.0" 226007(us)

Add the following to the ClusterRole in question

  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["list", "watch", "create", "update", "patch"]

Finally it works

$ kubectl logs standalone-cinder-provisioner-7d6594d789-n5z97
I0810 23:25:24.112695       1 controller.go:492] Starting provisioner controller a81a198d-9cf4-11e8-bd81-0a580af401d1!
I0810 23:25:43.659028       1 controller.go:1167] scheduleOperation[lock-provision-default/cinder-claim[af01ada4-9cf4-11e8-a146-fa163e420595]]
I0810 23:25:43.707991       1 leaderelection.go:156] attempting to acquire leader lease...
I0810 23:25:43.736478       1 leaderelection.go:178] successfully acquired lease to provision for pvc default/cinder-claim
I0810 23:25:43.738032       1 controller.go:1167] scheduleOperation[provision-default/cinder-claim[af01ada4-9cf4-11e8-a146-fa163e420595]]
I0810 23:25:50.133469       1 controller.go:1167] scheduleOperation[provision-default/cinder-claim[af01ada4-9cf4-11e8-a146-fa163e420595]]
I0810 23:25:50.148800       1 controller.go:900] volume "pvc-af01ada4-9cf4-11e8-a146-fa163e420595" for claim "default/cinder-claim" created
I0810 23:25:50.189688       1 controller.go:917] volume "pvc-af01ada4-9cf4-11e8-a146-fa163e420595" for claim "default/cinder-claim" saved
I0810 23:25:50.190113       1 controller.go:953] volume "pvc-af01ada4-9cf4-11e8-a146-fa163e420595" provisioned for claim "default/cinder-claim"
I0810 23:25:51.885102       1 leaderelection.go:198] stopped trying to renew lease to provision for pvc default/cinder-claim, task succeeded
$ kubectl get pvc
NAME           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cinder-claim   Bound     pvc-af01ada4-9cf4-11e8-a146-fa163e420595   1Gi        RWO            gold           31s

Confirm that the volume was also created on the Cinder side

$ cinder list
+--------------------------------------+--------+---------------------------------------------------------+------+-------------+----------+-------------+
|                  ID                  | Status |                           Name                          | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+---------------------------------------------------------+------+-------------+----------+-------------+
| 0bec46ee-8aef-4af8-9350-1f835eec1a5a | in-use | cinder-dynamic-pvc-b3ceb458-9cf4-11e8-bd81-0a580af401d1 |  1   |      -      |  false   |     None    |
+--------------------------------------+--------+---------------------------------------------------------+------+-------------+----------+-------------+
oomichi commented 6 years ago

Organize the steps

  1. Set up RBAC

    $ cat rbac.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: standalone-cinder-provisioner

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: standalone-cinder-provisioner
    rules:
      ...

    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: run-standalone-cinder-provisioner
    subjects:
      ...

oomichi commented 6 years ago

Confirm that the originally failing e2e test now passes → it passed

$ go run hack/e2e.go -- --provider=skeleton --test --test_args="--ginkgo.focus=should\sprovide\sbasic\sidentity" --check-version-skew=false
...
~ [SLOW TEST:186.335 seconds]
[sig-apps] StatefulSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:679
    should provide basic identity
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:93
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug 10 23:40:32.970: INFO: Running AfterSuite actions on all node
Aug 10 23:40:32.970: INFO: Running AfterSuite actions on node 1

Ran 1 of 999 Specs in 186.456 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 998 Skipped PASS

Ginkgo ran 1 suite in 3m6.697093796s
Test Suite Passed
2018/08/10 23:40:32 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=should\sprovide\sbasic\sidentity' finished in 3m6.915209756s
2018/08/10 23:40:32 e2e.go:83: Done
$

Ran all 9 StatefulSetBasic tests → 1 failed.

$ go run hack/e2e.go -- --provider=skeleton --test --test_args="--ginkgo.focus=\[StatefulSetBasic\]" --check-version-skew=false
...
~ Failure [1874.137 seconds]
[sig-apps] StatefulSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:679
    should not deadlock when a pod's predecessor fails [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:217

    Aug 11 00:05:41.870: Failed waiting for pods to enter running: timed out waiting for the condition

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:323
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug 11 00:26:04.996: INFO: Running AfterSuite actions on all node
Aug 11 00:26:04.996: INFO: Running AfterSuite actions on node 1

Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] should not deadlock when a pod's predecessor fails
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:323

Ran 9 of 999 Specs in 2651.584 seconds
FAIL! -- 8 Passed | 1 Failed | 0 Pending | 990 Skipped --- FAIL: TestE2E (2651.61s)
FAIL

Ginkgo ran 1 suite in 44m11.814377986s
Test Suite Failed
!!! Error in ./hack/ginkgo-e2e.sh:143
  Error in ./hack/ginkgo-e2e.sh:143. '"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- "${auth_config[@]:+${auth_config[@]}}" --ginkgo.flakeAttempts="${FLAKE_ATTEMPTS}" --host="${KUBE_MASTER_URL}" --provider="${KUBERNETES_PROVIDER}" --gce-project="${PROJECT:-}" --gce-zone="${ZONE:-}" --gce-region="${REGION:-}" --gce-multizone="${MULTIZONE:-false}" --gke-cluster="${CLUSTER_NAME:-}" --kube-master="${KUBE_MASTER:-}" --cluster-tag="${CLUSTER_ID:-}" --cloud-config-file="${CLOUD_CONFIG:-}" --repo-root="${KUBE_ROOT}" --node-instance-group="${NODE_INSTANCE_GROUP:-}" --prefix="${KUBE_GCE_INSTANCE_PREFIX:-e2e}" --network="${KUBE_GCE_NETWORK:-${KUBE_GKE_NETWORK:-e2e}}" --node-tag="${NODE_TAG:-}" --master-tag="${MASTER_TAG:-}" --cluster-monitoring-mode="${KUBE_ENABLE_CLUSTER_MONITORING:-standalone}" --prometheus-monitoring="${KUBE_ENABLE_PROMETHEUS_MONITORING:-false}" ${KUBE_CONTAINER_RUNTIME:+"--container-runtime=${KUBE_CONTAINER_RUNTIME}"} ${MASTER_OS_DISTRIBUTION:+"--master-os-distro=${MASTER_OS_DISTRIBUTION}"} ${NODE_OS_DISTRIBUTION:+"--node-os-distro=${NODE_OS_DISTRIBUTION}"} ${NUM_NODES:+"--num-nodes=${NUM_NODES}"} ${E2E_REPORT_DIR:+"--report-dir=${E2E_REPORT_DIR}"} ${E2E_REPORT_PREFIX:+"--report-prefix=${E2E_REPORT_PREFIX}"} "${@:-}"' exited with status 1
Call stack:
  1: ./hack/ginkgo-e2e.sh:143 main(...)
Exiting with status 1
2018/08/11 00:26:05 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=\[StatefulSetBasic\]' finished in 44m12.025884838s
2018/08/11 00:26:05 main.go:309: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[StatefulSetBasic\]: exit status 1]
2018/08/11 00:26:05 e2e.go:81: err: exit status 1
exit status 1
oomichi commented 6 years ago
Investigate the failure of

[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] should not deadlock when a pod's predecessor fails

Purpose of the test: verify that no deadlock occurs when a pod's predecessor fails. What is a StatefulSet in the first place? (see "Kubernetes: Up & Running")

Overview of what the test does

  1. Create a StatefulSet with 2 replicas, with new pod creation paused
  2. Wait for the first stateful pod (index 0) to reach Running
  3. Resume the index 0 pod (make it healthy; see the readiness-file sketch after this list)
  4. Wait for the index 1 stateful pod to reach Running. Comment in the test: "With the steps so far we have one healthy stateful pod and one unhealthy stateful pod. Deleting the healthy stateful pod should not cause a new stateful pod to be created until the remaining stateful pod becomes healthy, which will not happen until we set the healthy bit." ★ This comment is wrong; it should have been updated together with a test change but was missed. A fix is proposed in k/kubernetes/pull/67411.
  5. Delete the index 0 stateful pod
  6. Confirm that the index 0 stateful pod is recreated ★ The test times out and fails at this step-6 check
    Confirming stateful pod at index 0 is recreated.
    Aug 11 13:35:48.693: INFO: Found 1 stateful pods, waiting for 2
    Aug 11 13:35:58.700: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
    ... (the "Waiting for pod ss-0" lines continue)
  7. Resume the index 1 pod (make it healthy)
  8. Confirm that all pods of the StatefulSet have been created
    $ kubectl get pods -n=e2e-tests-statefulset-d7gtc
    NAME      READY     STATUS              RESTARTS   AGE
    ss-0      0/1       ContainerCreating   0          36s
    ss-1      0/1       Running             0          56s
    $
    $ kubectl describe pod ss-0 -n=e2e-tests-statefulset-d7gtc
    Name:               ss-0
    Namespace:          e2e-tests-statefulset-d7gtc
    Priority:           0
    PriorityClassName:  <none>
    Node:               k8s-node01/192.168.1.109
    Start Time:         Tue, 14 Aug 2018 17:33:04 +0000
    Labels:             baz=blah
                    controller-revision-hash=ss-7cf5fb4c86
                    foo=bar
                    statefulset.kubernetes.io/pod-name=ss-0
    Annotations:        <none>
    Status:             Pending
    IP:
    Controlled By:      StatefulSet/ss
    Containers:
    nginx:
    Container ID:
    Image:          k8s.gcr.io/nginx-slim-amd64:0.20
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Readiness:      exec [test -f /data/statefulset-continue] delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /data/ from datadir (rw)
      /home from home (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qxsl9 (ro)
    Conditions:
    Type              Status
    Initialized       True
    Ready             False
    ContainersReady   False
    PodScheduled      True
    Volumes:
    datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-ss-0
    ReadOnly:   false
    home:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/home
    HostPathType:
    default-token-qxsl9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qxsl9
    Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
    Type     Reason       Age                From                 Message
    ----     ------       ----               ----                 -------
    Normal   Scheduled    56s                default-scheduler    Successfully assigned e2e-tests-statefulset-d7gtc/ss-0 to k8s-node01
    Warning  FailedMount  22s (x7 over 55s)  kubelet, k8s-node01  MountVolume.WaitForAttach failed for volume "pvc-fb74a228-9fe7-11e8-a146-fa163e420595" : iscsi: failed to create new iface: iscsiadm: Could not create new interface 192.168.1.1:3260:pvc-fb74a228-9fe7-11e8-a146-fa163e420595. (exit status 15)
    Warning  FailedMount  1m (x4 over 8m)    kubelet, k8s-node01  Unable to mount volumes for pod "ss-0_e2e-tests-statefulset-d7gtc(19bac8fb-9fe8-11e8-a146-fa163e420595)": timeout expired waiting for volumes to attach or mount for pod "e2e-tests-statefulset-d7gtc"/"ss-0". list of unmounted volumes=[datadir]. list of unattached volumes=[datadir home default-token-qxsl9]
    $

    The problem is this:

    MountVolume.WaitForAttach failed for volume "pvc-fb74a228-9fe7-11e8-a146-fa163e420595" : iscsi: failed to create new iface: iscsiadm: Could not create new interface 192.168.1.1:3260:pvc-fb74a228-9fe7-11e8-a146-fa163e420595.
    (exit status 15)
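
On the pause/resume mechanism used in steps 1, 3 and 7: the readiness probe is exec [test -f /data/statefulset-continue], so "making a pod healthy" boils down to creating that file inside the pod. The e2e framework does this via kubectl exec; roughly the following (a sketch, the exact command is an assumption):

$ kubectl exec ss-0 -n e2e-tests-statefulset-d7gtc -- touch /data/statefulset-continue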
oomichi commented 6 years ago

Understand the basics of iscsiadm

Connection procedure

  1. Discover the targets
    # iscsiadm -m discovery -t sendtargets -p <IP address of the iSCSI target>
  2. Log in to the target
    # iscsiadm -m node --login -p <IP address of the iSCSI target>
  3. Verify the connection
    # iscsiadm -m session

Try it on k8s-node01

When nothing has been done yet:

$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.1
192.168.1.1:3260,1 iqn.2010-10.org.openstack:volume-bcb0de59-a4f9-4901-b869-d02d2dc2f06b
$
$ sudo iscsiadm -m session
iscsiadm: No active sessions.
$
oomichi commented 6 years ago
192.168.1.1:3260:pvc-fb74a228-9fe7-11e8-a146-fa163e420595

Confirm that tgtd is listening on iaas-ctrl (192.168.1.1)

$ sudo netstat -anp | grep 3260
tcp        0      0 0.0.0.0:3260            0.0.0.0:*               LISTEN      980/tgtd

Check the tgtd log (syslog) → nothing indicating an error

Aug 14 10:32:26 localhost tgtd[980]: tgtd: device_mgmt(246) sz:69 params:path=/dev/cinder-volumes/volume-1176b438-940f-4001-af22-b09fab94b7e6
Aug 14 10:32:26 localhost tgtd[980]: tgtd: bs_thread_open(409) 16
Aug 14 10:32:51 localhost tgtd[980]: tgtd: device_mgmt(246) sz:69 params:path=/dev/cinder-volumes/volume-5f9c95a5-28c0-415f-9a5c-d01826387d78
Aug 14 10:32:51 localhost tgtd[980]: tgtd: bs_thread_open(409) 16
Aug 14 11:02:18 localhost tgtd[980]: tgtd: conn_close_admin(237) close 1b 0
Aug 14 11:02:18 localhost tgtd[980]: tgtd: conn_close_admin(237) close 1c 0
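
As an additional check on the target host, tgtd's view of its targets and connected initiators can be listed with tgtadm (a sketch):

$ sudo tgtadm --lld iscsi --mode target --op show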
oomichi commented 6 years ago

syslog on node01

Aug 14 17:32:27 localhost kubelet[1006]: I0814 17:32:27.926985    1006 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-fb74a228-9fe7-11e8-a146-fa163e420595" (UniqueName: "kubernetes.io/iscsi/192.168.1.1:3260:iqn.2010-10.org.openstack:volume-1176b438-940f-4001-af22-b09fab94b7e6:1") pod "ss-0" (UID: "fb75a70a-9fe7-11e8-a146-fa163e420595")
Aug 14 17:32:27 localhost kubelet[1006]: E0814 17:32:27.927336    1006 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/iscsi/192.168.1.1:3260:
iqn.2010-10.org.openstack:volume-1176b438-940f-4001-af22-b09fab94b7e6:1\"" failed. No retries permitted until 2018-08-14 17:32:28.427162842 +0000 UTC m=+407716.613168366 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-fb74a228-9fe7-11e8-a146-fa163e420595\" (UniqueName: \"kubernetes.io/iscsi192.168.1.1:3260:iqn.2010-10.org.openstack:volume-1176b438-940f-4001-af22-b09fab94b7e6:1\") pod \"ss-0\" (UID: \"fb75a70a-9fe7-11e8-a146-fa163e420595\") "

The volume for the problematic ss-0 (pvc-fb74a228-9fe7-11e8-a146-fa163e420595) failed to be added to the node's VolumesInUse list.
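
What the node itself reports as attached can be checked from its status (a sketch; node.status.volumesInUse may be empty if nothing is attached):

$ kubectl get node k8s-node01 -o jsonpath='{.status.volumesInUse}'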

Aug 14 17:32:29 localhost kubelet[1006]: E0814 17:32:29.601485    1006 iscsi_util.go:250] iscsi: failed to rescan session with error: iscsiadm: No session found.
Aug 14 17:32:29 localhost kubelet[1006]:  (exit status 21)
Aug 14 17:32:30 localhost kubelet[1006]: W0814 17:32:30.925718    1006 iscsi_util.go:301] Warning: Failed to set iSCSI login mode to manual. Error: exit status 7
Aug 14 17:32:32 localhost kubelet[1006]: I0814 17:32:32.097219    1006 mount_linux.go:484] `fsck` error fsck from util-linux 2.27.1
Aug 14 17:32:32 localhost kubelet[1006]: fsck.ext2: Bad magic number in super-block while trying to open /dev/sda
Aug 14 17:32:32 localhost kubelet[1006]: /dev/sda:
Aug 14 17:32:32 localhost kubelet[1006]: The superblock could not be read or does not describe a valid ext2/ext3/ext4
Aug 14 17:32:32 localhost kubelet[1006]: filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
Aug 14 17:32:32 localhost kubelet[1006]: filesystem (and not swap or ufs or something else), then the superblock
Aug 14 17:32:32 localhost kubelet[1006]: is corrupt, and you might try running e2fsck with an alternate superblock:
Aug 14 17:32:32 localhost kubelet[1006]:     e2fsck -b 8193 <device>
Aug 14 17:32:32 localhost kubelet[1006]:  or
Aug 14 17:32:32 localhost kubelet[1006]:     e2fsck -b 32768 <device>

The iSCSI session rescan failed because no session was found, and the iSCSI login also failed.
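
To separate a kubelet problem from an open-iscsi problem, the same steps can be replayed by hand on k8s-node01; a sketch, using the IQN and portal from the kubelet log above (everything else here is an illustrative assumption):

IQN=iqn.2010-10.org.openstack:volume-1176b438-940f-4001-af22-b09fab94b7e6
PORTAL=192.168.1.1:3260
# Populate the node database used by --login.
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.1
# Switch the node record to manual startup (the operation that logged "exit status 7" above).
sudo iscsiadm -m node -T "$IQN" -p "$PORTAL" --op update -n node.startup -v manual
# Log in, then rescan the session (the rescan is what logged "No session found" above).
sudo iscsiadm -m node -T "$IQN" -p "$PORTAL" --login
sudo iscsiadm -m session --rescan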

oomichi commented 6 years ago

Summary: checking the state with iscsiadm while the failure is occurring

node01

$ sudo iscsiadm -m session
tcp: [20] 192.168.1.1:3260,1 iqn.2010-10.org.openstack:volume-feee5128-4bea-41ac-8026-0f7ecc0f2e11 (non-flash)
$
$ kubectl describe pod ss-0 -n=e2e-tests-statefulset-snkc4
Name:               ss-0
Namespace:          e2e-tests-statefulset-snkc4
Priority:           0
PriorityClassName:  <none>
Node:               k8s-node01/192.168.1.109
Start Time:         Tue, 14 Aug 2018 23:37:46 +0000
Labels:             baz=blah
                    controller-revision-hash=ss-7cf5fb4c86
                    foo=bar
                    statefulset.kubernetes.io/pod-name=ss-0
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      StatefulSet/ss
Containers:
  nginx:
    Container ID:
    Image:          k8s.gcr.io/nginx-slim-amd64:0.20
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Readiness:      exec [test -f /data/statefulset-continue] delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /data/ from datadir (rw)
      /home from home (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fl59l (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-ss-0
    ReadOnly:   false
  home:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/home
    HostPathType:
  default-token-fl59l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fl59l
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age               From                 Message
  ----     ------       ----              ----                 -------
  Normal   Scheduled    1m                default-scheduler    Successfully assigned e2e-tests-statefulset-snkc4/ss-0 to k8s-node01
  Warning  FailedMount  28s (x7 over 1m)  kubelet, k8s-node01  MountVolume.WaitForAttach failed for volume "pvc-e1e4bf87-a01a-11e8-a146-fa163e420595" : iscsi: failed to create new iface: iscsiadm: Could not create new interface 192.168.1.1:3260:pvc-e1e4bf87-a01a-11e8-a146-fa163e420595.
 (exit status 15)

master

$ sudo iscsiadm -m session
tcp: [10] 192.168.1.1:3260,1 iqn.2010-10.org.openstack:volume-f1ce486e-5d18-4ecf-9579-30e01e7c8c6e (non-flash)
$ kubectl describe pod ss-1 -n=e2e-tests-statefulset-snkc4
Name:               ss-1
Namespace:          e2e-tests-statefulset-snkc4
Priority:           0
PriorityClassName:  <none>
Node:               k8s-master/192.168.1.108
Start Time:         Tue, 14 Aug 2018 23:37:34 +0000
Labels:             baz=blah
                    controller-revision-hash=ss-7cf5fb4c86
                    foo=bar
                    statefulset.kubernetes.io/pod-name=ss-1
Annotations:        <none>
Status:             Running
IP:                 10.244.0.70
Controlled By:      StatefulSet/ss
Containers:
  nginx:
    Container ID:   docker://94e47aca9ed0b14c6cba30ed150fe257a99c51b87fa54910df0517f1f5575b68
    Image:          k8s.gcr.io/nginx-slim-amd64:0.20
    Image ID:       docker://sha256:69854bafc1214f1a7f88c32f193dd0112e4d89d5bd9da9a85d95d5735acbc397
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 14 Aug 2018 23:37:41 +0000
    Ready:          False
    Restart Count:  0
    Readiness:      exec [test -f /data/statefulset-continue] delay=0s timeout=1s period=1s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /data/ from datadir (rw)
      /home from home (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fl59l (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-ss-1
    ReadOnly:   false
  home:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/home
    HostPathType:
  default-token-fl59l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fl59l
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age               From                     Message
  ----     ------                  ----              ----                     -------
  Warning  FailedScheduling        4m (x12 over 4m)  default-scheduler        pod has unbound PersistentVolumeClaims (repeated 2 times)
  Normal   Scheduled               4m                default-scheduler        Successfully assigned e2e-tests-statefulset-snkc4/ss-1 to k8s-master
  Normal   SuccessfulAttachVolume  4m                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-fa694123-a01a-11e8-a146-fa163e420595"
  Normal   Pulled                  4m                kubelet, k8s-master      Container image "k8s.gcr.io/nginx-slim-amd64:0.20" already present on machine
  Normal   Created                 4m                kubelet, k8s-master      Created container
  Normal   Started                 4m                kubelet, k8s-master      Started container
  Warning  Unhealthy               3m (x22 over 4m)  kubelet, k8s-master      Readiness probe failed:
$
$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.1
192.168.1.1:3260,1 iqn.2010-10.org.openstack:volume-bcb0de59-a4f9-4901-b869-d02d2dc2f06b
192.168.1.1:3260,1 iqn.2010-10.org.openstack:volume-feee5128-4bea-41ac-8026-0f7ecc0f2e11
192.168.1.1:3260,1 iqn.2010-10.org.openstack:volume-f1ce486e-5d18-4ecf-9579-30e01e7c8c6e
$
$ cinder list
+--------------------------------------+--------+---------------------------------------------------------+------+-------------+----------+-------------+
| ID                                   | Status | Name                                                    | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+---------------------------------------------------------+------+-------------+----------+-------------+
| bcb0de59-a4f9-4901-b869-d02d2dc2f06b | in-use | cinder-dynamic-pvc-e7df489b-9d09-11e8-bcf9-0a580af40102 | 1    | -           | false    | None        |
| f1ce486e-5d18-4ecf-9579-30e01e7c8c6e | in-use | cinder-dynamic-pvc-015bd891-a01b-11e8-b111-0a580af400e8 | 1    | -           | false    | None        |
| feee5128-4bea-41ac-8026-0f7ecc0f2e11 | in-use | cinder-dynamic-pvc-e68a58db-a01a-11e8-b111-0a580af400e8 | 1    | -           | false    | None        |
+--------------------------------------+--------+---------------------------------------------------------+------+-------------+----------+-------------+
$ kubectl get pvc --all-namespaces
NAMESPACE                     NAME           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
e2e-tests-statefulset-snkc4   datadir-ss-0   Bound     pvc-e1e4bf87-a01a-11e8-a146-fa163e420595   1          RWO            gold           22m
e2e-tests-statefulset-snkc4   datadir-ss-1   Bound     pvc-fa694123-a01a-11e8-a146-fa163e420595   1          RWO            gold           21m
$
$ kubectl describe pvc datadir-ss-0 -n=e2e-tests-statefulset-snkc4
Name:          datadir-ss-0
Namespace:     e2e-tests-statefulset-snkc4
StorageClass:  gold
Status:        Bound
Volume:        pvc-e1e4bf87-a01a-11e8-a146-fa163e420595
Labels:        baz=blah
               foo=bar
Annotations:   cinderVolumeId=feee5128-4bea-41ac-8026-0f7ecc0f2e11
               control-plane.alpha.kubernetes.io/leader={"holderIdentity":"8c563afe-9d63-11e8-b111-0a580af400e8","leaseDurationSeconds":15,"acquireTime":"2018-08-14T23:36:43Z","renewTime":"2018-08-14T23:36:53Z","lea...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               volume.beta.kubernetes.io/storage-provisioner=openstack.org/standalone-cinder
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1
Access Modes:  RWO
Events:
  Type    Reason                 Age                From                                                                                                                 Message
  ----    ------                 ----               ----                                                                                                                 -------
  Normal  Provisioning           22m                openstack.org/standalone-cinder standalone-cinder-provisioner-7d6594d789-9mtb9 8c563afe-9d63-11e8-b111-0a580af400e8  External provisioner is provisioning volume for claim "e2e-tests-statefulset-snkc4/datadir-ss-0"
  Normal  ExternalProvisioning   22m (x7 over 22m)  persistentvolume-controller                                                                                          waiting for a volume to be created, either by external provisioner "openstack.org/standalone-cinder" or manually created by system administrator
  Normal  ProvisioningSucceeded  22m                openstack.org/standalone-cinder standalone-cinder-provisioner-7d6594d789-9mtb9 8c563afe-9d63-11e8-b111-0a580af400e8  Successfully provisioned volume pvc-e1e4bf87-a01a-11e8-a146-fa163e420595
$
$ kubectl describe pv pvc-e1e4bf87-a01a-11e8-a146-fa163e420595
Name:            pvc-e1e4bf87-a01a-11e8-a146-fa163e420595
Labels:          <none>
Annotations:     cinderVolumeId=feee5128-4bea-41ac-8026-0f7ecc0f2e11
                 pv.kubernetes.io/provisioned-by=openstack.org/standalone-cinder
                 standaloneCinderProvisionerIdentity=openstack.org/standalone-cinder
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    gold
Status:          Bound
Claim:           e2e-tests-statefulset-snkc4/datadir-ss-0
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        1
Node Affinity:   <none>
Message:
Source:
    Type:               ISCSI (an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod)
    TargetPortal:       192.168.1.1:3260
    IQN:                iqn.2010-10.org.openstack:volume-feee5128-4bea-41ac-8026-0f7ecc0f2e11
    Lun:                1
    ISCSIInterface      default
    FSType:
    ReadOnly:           false
    Portals:            []
    DiscoveryCHAPAuth:  false
    SessionCHAPAuth:    true
    SecretRef:          &{pvc-e1e4bf87-a01a-11e8-a146-fa163e420595-secret }
    InitiatorName:      iqn.2018-01.io.k8s:a13fc3d1cc22
Events:                 <none>
oomichi commented 6 years ago

Current situation

oomichi commented 6 years ago

Try to reproduce it by hand. → Simply deleting the first Pod of the StatefulSet does not reproduce it; is marking the second Pod Unhealthy required? (see the sketch after the step list)

  1. Create the StatefulSet (following https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/)
  2. Check the state
    $ kubectl get statefulsets
    NAME      DESIRED   CURRENT   AGE
    web       2         2         1m
    $ kubectl get pods
    NAME                                             READY     STATUS    RESTARTS   AGE
    web-0                                            1/1       Running   0          1m
    web-1                                            1/1       Running   0          47s
    $ kubectl get pods -o=wide
    NAME                                             READY     STATUS    RESTARTS   AGE       IP             NODE
    web-0                                            1/1       Running   0          6m        10.244.1.167   k8s-node01
    web-1                                            1/1       Running   0          5m        10.244.0.71    k8s-master
    $ kubectl get pvc
    NAME        STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    www-web-0   Bound     pvc-51c69569-a022-11e8-a146-fa163e420595   1Gi        RWO            gold           2m
    www-web-1   Bound     pvc-66f42404-a022-11e8-a146-fa163e420595   1Gi        RWO            gold           1m
    $ kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM               STORAGECLASS   REASON    AGE
    pvc-51c69569-a022-11e8-a146-fa163e420595   1Gi        RWO            Delete           Bound     default/www-web-0   gold                     8m
    pvc-66f42404-a022-11e8-a146-fa163e420595   1Gi        RWO            Delete           Bound     default/www-web-1   gold                     8m
    $ kubectl describe pv pvc-51c69569-a022-11e8-a146-fa163e420595
    Name:            pvc-51c69569-a022-11e8-a146-fa163e420595
    Labels:          <none>
    Annotations:     cinderVolumeId=8168a06f-4522-42c5-849d-9a38287b4869
                 pv.kubernetes.io/provisioned-by=openstack.org/standalone-cinder
                 standaloneCinderProvisionerIdentity=openstack.org/standalone-cinder
    Finalizers:      [kubernetes.io/pv-protection]
    StorageClass:    gold
    Status:          Bound
    Claim:           default/www-web-0
    Reclaim Policy:  Delete
    Access Modes:    RWO
    Capacity:        1Gi
    Node Affinity:   <none>
    Message:
    Source:
    Type:               ISCSI (an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod)
    TargetPortal:       192.168.1.1:3260
    IQN:                iqn.2010-10.org.openstack:volume-8168a06f-4522-42c5-849d-9a38287b4869
    Lun:                1
    ISCSIInterface      default
    FSType:
    ReadOnly:           false
    Portals:            []
    DiscoveryCHAPAuth:  false
    SessionCHAPAuth:    true
    SecretRef:          &{pvc-51c69569-a022-11e8-a146-fa163e420595-secret }
    InitiatorName:      iqn.2018-01.io.k8s:a13fc3d1cc22
    Events:                 <none>
    $ cinder list
    +--------------------------------------+--------+---------------------------------------------------------+------+-------------+----------+-------------+
    |                  ID                  | Status |                           Name                          | Size | Volume Type | Bootable | Attached to |
    +--------------------------------------+--------+---------------------------------------------------------+------+-------------+----------+-------------+
    | 64bf2ed1-a630-4c2e-8398-c0a853728e4e | in-use | cinder-dynamic-pvc-67093a74-a022-11e8-b111-0a580af400e8 |  1   |      -      |  false   |     None    |
    | 8168a06f-4522-42c5-849d-9a38287b4869 | in-use | cinder-dynamic-pvc-56f9f656-a022-11e8-b111-0a580af400e8 |  1   |      -      |  false   |     None    |
  3. Check the iSCSI state → confirm that web-0 is on k8s-node01 and that the kubelet is connected to PV pvc-51c69569-a022-11e8-a146-fa163e420595
    ubuntu@k8s-node01:~$ sudo iscsiadm -m session
    tcp: [21] 192.168.1.1:3260,1 iqn.2010-10.org.openstack:volume-8168a06f-4522-42c5-849d-9a38287b4869 (non-flash)
  4. Try deleting web-0 → it comes back up without any problem
    $ kubectl delete pod web-0
    $ kubectl get pods
    NAME                                             READY     STATUS              RESTARTS   AGE
    web-0                                            0/1       ContainerCreating   0          10s
    web-1                                            1/1       Running             0          13m
    $
    ...
    $ kubectl get pods -o=wide
    NAME                                             READY     STATUS    RESTARTS   AGE       IP             NODE
    web-0                                            1/1       Running   0          52s       10.244.1.168   k8s-node01
    web-1                                            1/1       Running   0          14m       10.244.0.71    k8s-master
  5. k8s-node01 holds the same iSCSI session as before
    ubuntu@k8s-node01:~$ sudo iscsiadm -m session
    tcp: [22] 192.168.1.1:3260,1 iqn.2010-10.org.openstack:volume-8168a06f-4522-42c5-849d-9a38287b4869 (non-flash)
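
Since the e2e test drives pod health through the readiness probe shown in the describes above (test -f /data/statefulset-continue), the "one healthy, one unhealthy" condition missing from this manual repro could be approximated by hand; a sketch against a running e2e "ss" StatefulSet (the namespace is generated per test run, so substitute it):

NS=<e2e-tests-statefulset-namespace>
# Make ss-1 unhealthy by removing the file its readiness probe checks for.
kubectl exec ss-1 -n "$NS" -- rm -f /data/statefulset-continue
# Delete the healthy ss-0 and watch whether its volume is remounted cleanly.
kubectl delete pod ss-0 -n "$NS"
kubectl get pods -n "$NS" -o wide -w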
oomichi commented 6 years ago

As an experiment, perform the Resume of the pod at index 1 before the Delete and check whether that is what makes the difference (a rebuild/run sketch follows the diff).

--- a/test/e2e/apps/statefulset.go
+++ b/test/e2e/apps/statefulset.go
@@ -231,6 +231,9 @@ var _ = SIGDescribe("StatefulSet", func() {
                        By("Waiting for stateful pod at index 1 to enter running.")
                        sst.WaitForRunning(2, 1, ss)

+                       By("Resuming stateful pod at index 1.")
+                       sst.ResumeNextPod(ss)
+
                        // Now we have 1 healthy and 1 unhealthy stateful pod. Deleting the healthy stateful pod should *not*
                        // create a new stateful pod till the remaining stateful pod becomes healthy, which won't happen till
                        // we set the healthy bit.
@@ -241,9 +244,6 @@ var _ = SIGDescribe("StatefulSet", func() {
                        By("Confirming stateful pod at index 0 is recreated.")
                        sst.WaitForRunning(2, 1, ss)

-                       By("Resuming stateful pod at index 1.")
-                       sst.ResumeNextPod(ss)
-
                        By("Confirming all stateful pods in statefulset are created.")
                        sst.WaitForRunningAndReady(*ss.Spec.Replicas, ss)
                })
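
One way to rebuild and re-run only this test with the change applied; a sketch, assuming a kubernetes source tree and a kubeconfig for the existing cluster (the exact e2e.test flags may differ from how the suite is normally launched in this environment):

# Rebuild just the e2e test binary after editing test/e2e/apps/statefulset.go.
make WHAT=test/e2e/e2e.test
# Run only the Basic StatefulSet functionality specs against the running cluster.
_output/bin/e2e.test --kubeconfig=$HOME/.kube/config \
  --ginkgo.focus="Basic StatefulSet functionality"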
oomichi commented 5 years ago

Root cause: the kube-scheduler Pod was malfunctioning. Resolved by a clean redeploy.
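
For reference, the scheduler's health can be checked directly before falling back to a clean redeploy; a sketch (the pod name kube-scheduler-k8s-master is an assumption based on the usual <component>-<node-name> naming of kubeadm static pods):

# Check the scheduler pod and the component status view.
kubectl get pods -n kube-system -o wide | grep kube-scheduler
kubectl get componentstatuses
# Look for errors in the scheduler log around the time of the failed bindings.
kubectl logs -n kube-system kube-scheduler-k8s-master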