oomichi / try-kubernetes


[It] should support existing directories when readOnly specified in the volumeSource #49

Closed · oomichi closed this issue 5 years ago

oomichi commented 5 years ago

Summary: the test pod stays Pending for the full 5m0s wait because mounting its NFS volume on k8s-node01 fails, so the test times out.

STEP: Creating a PVC followed by a PV
Sep 19 21:17:53.830: INFO: Waiting for PV nfs-gcppr to bind to PVC pvc-kh5xq
Sep 19 21:17:53.830: INFO: Waiting up to 3m0s for PersistentVolumeClaim pvc-kh5xq to have phase Bound
Sep 19 21:17:53.834: INFO: PersistentVolumeClaim pvc-kh5xq found but phase is Pending instead of Bound.
Sep 19 21:17:55.845: INFO: PersistentVolumeClaim pvc-kh5xq found but phase is Pending instead of Bound.
Sep 19 21:17:57.857: INFO: PersistentVolumeClaim pvc-kh5xq found but phase is Pending instead of Bound.
Sep 19 21:17:59.869: INFO: PersistentVolumeClaim pvc-kh5xq found and phase=Bound (6.039457536s)
Sep 19 21:17:59.869: INFO: Waiting up to 3m0s for PersistentVolume nfs-gcppr to have phase Bound
Sep 19 21:17:59.883: INFO: PersistentVolume nfs-gcppr found and phase=Bound (13.26974ms)
[It] should support existing directories when readOnly specified in the volumeSource
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:307
STEP: Creating pod to write volume content pod-subpath-test-nfspvc-f47f
STEP: Creating a pod to test subpath
Sep 19 21:17:59.929: INFO: Waiting up to 5m0s for pod "pod-subpath-test-nfspvc-f47f" in namespace "e2e-tests-subpath-zrdp6" to be "success or failure"
Sep 19 21:17:59.945: INFO: Pod "pod-subpath-test-nfspvc-f47f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.530343ms
Sep 19 21:18:01.961: INFO: Pod "pod-subpath-test-nfspvc-f47f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031621855s
...
Sep 19 21:22:57.838: INFO: Pod "pod-subpath-test-nfspvc-f47f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.909506018s
Sep 19 21:22:59.851: INFO: Pod "pod-subpath-test-nfspvc-f47f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.922088849s
Sep 19 21:23:01.868: INFO: Failed to get logs from node "k8s-node01" pod "pod-subpath-test-nfspvc-f47f" container "test-container-subpath-nfspvc-f47f": the server rejected our request for an unknown reason (get pods pod-subpath-test-nfspvc-f47f)
Sep 19 21:23:01.877: INFO: Failed to get logs from node "k8s-node01" pod "pod-subpath-test-nfspvc-f47f" container "test-container-volume-nfspvc-f47f": the server rejected our request for an unknown reason (get pods pod-subpath-test-nfspvc-f47f)
STEP: delete the pod
Sep 19 21:23:01.884: INFO: Waiting for pod pod-subpath-test-nfspvc-f47f to disappear
Sep 19 21:23:01.889: INFO: Pod pod-subpath-test-nfspvc-f47f still exists
Sep 19 21:23:03.890: INFO: Waiting for pod pod-subpath-test-nfspvc-f47f to disappear
Sep 19 21:23:03.895: INFO: Pod pod-subpath-test-nfspvc-f47f still exists
Sep 19 21:23:05.890: INFO: Waiting for pod pod-subpath-test-nfspvc-f47f to disappear
Sep 19 21:23:05.904: INFO: Pod pod-subpath-test-nfspvc-f47f no longer exists
Sep 19 21:23:05.904: INFO: Unexpected error occurred: expected pod "pod-subpath-test-nfspvc-f47f" success: Gave up after waiting 5m0s for pod "pod-subpath-test-nfspvc-f47f" to be "success or failure"
[AfterEach] [Volume type: nfsPVC]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:161
STEP: Deleting pod
Sep 19 21:23:05.907: INFO: Deleting pod "pod-subpath-test-nfspvc-f47f" in namespace "e2e-tests-subpath-zrdp6"
STEP: Cleaning up volume
Sep 19 21:23:05.914: INFO: Deleting PersistentVolumeClaim "pvc-kh5xq"
Sep 19 21:23:05.927: INFO: Deleting PersistentVolume "nfs-gcppr"
Sep 19 21:23:05.933: INFO: Deleting pod "nfs-server" in namespace "e2e-tests-subpath-zrdp6"
Sep 19 21:23:05.942: INFO: Wait up to 5m0s for pod "nfs-server" to be fully deleted
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-subpath-zrdp6".
STEP: Found 17 events.
Sep 19 21:23:15.975: INFO: At 2018-09-19 21:17:49 +0000 UTC - event for nfs-server: {default-scheduler } Scheduled: Successfully assigned e2e-tests-subpath-zrdp6/nfs-server to k8s-node01
Sep 19 21:23:15.975: INFO: At 2018-09-19 21:17:52 +0000 UTC - event for nfs-server: {kubelet k8s-node01} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/volume-nfs:0.8" already present on machine
Sep 19 21:23:15.975: INFO: At 2018-09-19 21:17:52 +0000 UTC - event for nfs-server: {kubelet k8s-node01} Created: Created container
Sep 19 21:23:15.975: INFO: At 2018-09-19 21:17:52 +0000 UTC - event for nfs-server: {kubelet k8s-node01} Started: Started container
Sep 19 21:23:15.975: INFO: At 2018-09-19 21:17:53 +0000 UTC - event for pvc-kh5xq: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "e2e-tests-subpath-zrdp6" not found
Sep 19 21:23:15.975: INFO: At 2018-09-19 21:17:59 +0000 UTC - event for pod-subpath-test-nfspvc-f47f: {default-scheduler } Scheduled: Successfully assigned e2e-tests-subpath-zrdp6/pod-subpath-test-nfspvc-f47f to k8s-node01
Sep 19 21:23:15.975: INFO: At 2018-09-19 21:18:00 +0000 UTC - event for pod-subpath-test-nfspvc-f47f: {kubelet k8s-node01} FailedMount: MountVolume.SetUp failed for volume "nfs-gcppr" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr --scope -- mount -t nfs 10.244.1.117:/exports /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr
Output: Running scope as unit run-r114b0f782e7b49c2b4eb01f1fe08d56b.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.117:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Sep 19 21:23:15.975: INFO: At 2018-09-19 21:18:00 +0000 UTC - event for pod-subpath-test-nfspvc-f47f: {kubelet k8s-node01} FailedMount: MountVolume.SetUp failed for volume "nfs-gcppr" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr --scope -- mount -t nfs 10.244.1.117:/exports /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr
Output: Running scope as unit run-r3952e22ef17a46ebb658f354bece0dfe.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.117:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Sep 19 21:23:15.975: INFO: At 2018-09-19 21:18:01 +0000 UTC - event for pod-subpath-test-nfspvc-f47f: {kubelet k8s-node01} FailedMount: MountVolume.SetUp failed for volume "nfs-gcppr" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr --scope -- mount -t nfs 10.244.1.117:/exports /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr
Output: Running scope as unit run-r95c78d8461434f6c970aff1b6e3d59cc.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.117:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Sep 19 21:23:15.975: INFO: At 2018-09-19 21:18:03 +0000 UTC - event for pod-subpath-test-nfspvc-f47f: {kubelet k8s-node01} FailedMount: MountVolume.SetUp failed for volume "nfs-gcppr" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr --scope -- mount -t nfs 10.244.1.117:/exports /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr
Output: Running scope as unit run-rbde00292bc5942109c834dc82867697f.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.117:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Sep 19 21:23:15.975: INFO: At 2018-09-19 21:18:08 +0000 UTC - event for pod-subpath-test-nfspvc-f47f: {kubelet k8s-node01} FailedMount: MountVolume.SetUp failed for volume "nfs-gcppr" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr --scope -- mount -t nfs 10.244.1.117:/exports /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr
Output: Running scope as unit run-r08fe17b7518544779efbc2a4d4b91f8b.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.117:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Sep 19 21:23:15.975: INFO: At 2018-09-19 21:18:16 +0000 UTC - event for pod-subpath-test-nfspvc-f47f: {kubelet k8s-node01} FailedMount: MountVolume.SetUp failed for volume "nfs-gcppr" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr --scope -- mount -t nfs 10.244.1.117:/exports /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr
Output: Running scope as unit run-r69a3cd5ce6444eb39ef219df043e6063.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.117:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Sep 19 21:23:15.975: INFO: At 2018-09-19 21:18:32 +0000 UTC - event for pod-subpath-test-nfspvc-f47f: {kubelet k8s-node01} FailedMount: MountVolume.SetUp failed for volume "nfs-gcppr" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr --scope -- mount -t nfs 10.244.1.117:/exports /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr
Output: Running scope as unit run-rdd024fc59683443aa3f4c9ff3ba839f4.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.117:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Sep 19 21:23:15.975: INFO: At 2018-09-19 21:19:04 +0000 UTC - event for pod-subpath-test-nfspvc-f47f: {kubelet k8s-node01} FailedMount: MountVolume.SetUp failed for volume "nfs-gcppr" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr --scope -- mount -t nfs 10.244.1.117:/exports /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr
Output: Running scope as unit run-rd787a3bde3ff4f2b81db7c643b766a97.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.117:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Sep 19 21:23:15.975: INFO: At 2018-09-19 21:20:02 +0000 UTC - event for pod-subpath-test-nfspvc-f47f: {kubelet k8s-node01} FailedMount: Unable to mount volumes for pod "pod-subpath-test-nfspvc-f47f_e2e-tests-subpath-zrdp6(7c537d37-bc51-11e8-a146-fa163e420595)": timeout expired waiting for volumes to attach or mount for pod "e2e-tests-subpath-zrdp6"/"pod-subpath-test-nfspvc-f47f". list of unmounted volumes=[test-volume]. list of unattached volumes=[test-volume liveness-probe-volume default-token-hzwft]
Sep 19 21:23:15.975: INFO: At 2018-09-19 21:20:08 +0000 UTC - event for pod-subpath-test-nfspvc-f47f: {kubelet k8s-node01} FailedMount: (combined from similar events): MountVolume.SetUp failed for volume "nfs-gcppr" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr --scope -- mount -t nfs 10.244.1.117:/exports /var/lib/kubelet/pods/7c537d37-bc51-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-gcppr
Output: Running scope as unit run-r38b59d1b4f9b4cb9b5ec8695f952acca.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.117:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Sep 19 21:23:15.975: INFO: At 2018-09-19 21:23:08 +0000 UTC - event for nfs-server: {kubelet k8s-node01} Killing: Killing container with id docker://nfs-server:Need to kill Pod
Sep 19 21:23:15.998: INFO: POD                                             NODE        PHASE    GRACE  CONDITIONS
Sep 19 21:23:15.998: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9  k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  }]
Sep 19 21:23:15.999: INFO: nfs-server                                      k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-19 21:15:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-19 21:15:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-19 21:15:53 +0000 UTC  }]
Sep 19 21:23:15.999: INFO: pod-subpath-test-nfs-7gfr                       k8s-node01  Pending         [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2018-09-19 21:15:57 +0000 UTC ContainersNotInitialized containers with incomplete status: [init-volume-nfs-7gfr]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-09-19 21:15:57 +0000 UTC ContainersNotReady containers with unready status: [test-container-subpath-nfs-7gfr test-container-volume-nfs-7gfr]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [test-container-subpath-nfs-7gfr test-container-volume-nfs-7gfr]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-19 21:15:57 +0000 UTC  }]
Sep 19 21:23:16.000: INFO: coredns-78fcdf6894-xx76v                        k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:19 +0000 UTC  }]
Sep 19 21:23:16.000: INFO: coredns-78fcdf6894-zmpph                        k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 06:49:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 06:49:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 06:49:24 +0000 UTC  }]
Sep 19 21:23:16.000: INFO: etcd-k8s-master                                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  }]
Sep 19 21:23:16.001: INFO: kube-apiserver-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 01:50:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  }]
Sep 19 21:23:16.001: INFO: kube-controller-manager-k8s-master              k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:28 +0000 UTC  }]
Sep 19 21:23:16.002: INFO: kube-flannel-ds-7df6r                           k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:12:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-17 17:12:31 +0000 UTC  }]
Sep 19 21:23:16.002: INFO: kube-flannel-ds-tllws                           k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-21 23:55:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:53 +0000 UTC  }]
Sep 19 21:23:16.002: INFO: kube-proxy-hxp7z                                k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-21 23:55:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  }]
Sep 19 21:23:16.003: INFO: kube-proxy-zwrl4                                k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  }]
Sep 19 21:23:16.003: INFO: kube-scheduler-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-22 02:30:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-09-19 21:18:31 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-22 02:30:48 +0000 UTC  }]
Sep 19 21:23:16.004: INFO: metrics-server-86bd9d7667-twb2r                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  }]
Sep 19 21:23:16.004: INFO:
Sep 19 21:23:16.009: INFO:
Logging node info for node k8s-master
Sep 19 21:23:16.014: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master,UID:94f19db7-89e3-11e8-b234-fa163e420595,ResourceVersion:6381135,Generation:0,CreationTimestamp:2018-07-17 17:05:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: k8s-master,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"06:0e:73:28:c3:b1"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.108,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143394816 0} {<nil>} 4046284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038537216 0} {<nil>} 3943884Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-09-19 21:23:10 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-09-19 21:23:10 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-09-19 21:23:10 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-09-19 21:23:10 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-09-19 21:23:10 +0000 UTC 2018-07-31 23:04:27 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 192.168.1.108} {Hostname k8s-master}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db2c06c39a54cd3a93a4e0a44823fd6,SystemUUID:1DB2C06C-39A5-4CD3-A93A-4E0A44823FD6,BootID:d2b66fba-cf4e-4205-b596-3ffb4e579c16,KernelVersion:4.4.0-130-generic,OSImage:Ubuntu 16.04.5 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[golang:1.10] 793901893} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[k8s.gcr.io/etcd-amd64:3.2.18] 218904307} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.1] 186675825} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.0] 186617744} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.1] 155252555} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.0] 155203118} {[k8s.gcr.io/nginx-slim:0.8] 110487599} {[nginx:latest] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.1] 56781436} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.0] 56757023} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[k8scloudprovider/cinder-provisioner:latest] 29292916} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[busybox:latest] 1162769} {[k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Sep 19 21:23:16.016: INFO:
Logging kubelet events for node k8s-master
Sep 19 21:23:16.022: INFO:
Logging pods the kubelet thinks is on node k8s-master
Sep 19 21:23:16.032: INFO: coredns-78fcdf6894-xx76v started at 2018-08-17 09:12:19 +0000 UTC (0+1 container statuses recorded)
Sep 19 21:23:16.032: INFO:      Container coredns ready: true, restart count 0
Sep 19 21:23:16.032: INFO: metrics-server-86bd9d7667-twb2r started at 2018-08-03 08:45:39 +0000 UTC (0+1 container statuses recorded)
Sep 19 21:23:16.032: INFO:      Container metrics-server ready: true, restart count 1
Sep 19 21:23:16.032: INFO: kube-controller-manager-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 19 21:23:16.032: INFO: coredns-78fcdf6894-zmpph started at 2018-08-17 06:49:24 +0000 UTC (0+1 container statuses recorded)
Sep 19 21:23:16.032: INFO:      Container coredns ready: true, restart count 0
Sep 19 21:23:16.032: INFO: kube-flannel-ds-7df6r started at 2018-07-17 17:12:31 +0000 UTC (1+1 container statuses recorded)
Sep 19 21:23:16.032: INFO:      Init container install-cni ready: true, restart count 6
Sep 19 21:23:16.032: INFO:      Container kube-flannel ready: true, restart count 6
Sep 19 21:23:16.032: INFO: kube-proxy-zwrl4 started at 2018-07-31 23:08:37 +0000 UTC (0+1 container statuses recorded)
Sep 19 21:23:16.032: INFO:      Container kube-proxy ready: true, restart count 6
Sep 19 21:23:16.032: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9 started at 2018-08-11 12:39:04 +0000 UTC (0+1 container statuses recorded)
Sep 19 21:23:16.032: INFO:      Container standalone-cinder-provisioner ready: true, restart count 0
Sep 19 21:23:16.032: INFO: kube-apiserver-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 19 21:23:16.032: INFO: kube-scheduler-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 19 21:23:16.032: INFO: etcd-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 19 21:23:16.098: INFO:
Latency metrics for node k8s-master
Sep 19 21:23:16.098: INFO:
Logging node info for node k8s-node01
Sep 19 21:23:16.102: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-node01,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-node01,UID:980d8d67-9515-11e8-a804-fa163e420595,ResourceVersion:6381139,Generation:0,CreationTimestamp:2018-07-31 23:01:01 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/zone: ,kubernetes.io/hostname: k8s-node01,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"96:5e:9d:88:d2:c5"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.109,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143386624 0} {<nil>} 4046276Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038529024 0} {<nil>} 3943876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-09-19 21:23:13 +0000 UTC 2018-08-10 00:17:13 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-09-19 21:23:13 +0000 UTC 2018-08-10 00:17:13 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-09-19 21:23:13 +0000 UTC 2018-08-10 00:17:13 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-09-19 21:23:13 +0000 UTC 2018-07-31 23:01:01 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-09-19 21:23:13 +0000 UTC 2018-08-21 23:55:45 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 192.168.1.109} {Hostname k8s-node01}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817a385b9de241668e47cd87cda24f47,SystemUUID:817A385B-9DE2-4166-8E47-CD87CDA24F47,BootID:1e0759ec-378d-4817-9916-4a967bfc2521,KernelVersion:4.4.0-133-generic,OSImage:Ubuntu 16.04.4 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[humblec/glusterdynamic-provisioner:v1.0] 373281573} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[quay.io/kubernetes_incubator/nfs-provisioner:v1.0.9] 332415371} {[gcr.io/kubernetes-e2e-test-images/volume-nfs:0.8] 247157334} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils-amd64:1.0] 195659796} {[k8s.gcr.io/resource_consumer:beta] 132805424} {[k8s.gcr.io/nginx-slim:0.8] 110487599} {[nginx:latest] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[k8scloudprovider/cinder-provisioner:latest] 28582964} {[gcr.io/kubernetes-e2e-test-images/nettest-amd64:1.0] 27413498} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/dnsutils-amd64:1.0] 9030162} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver-amd64:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter-amd64:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness-amd64:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/fakegitserver-amd64:1.0] 4608683} {[k8s.gcr.io/k8s-dns-dnsmasq-amd64:1.14.5] 4324973} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester-amd64:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/port-forward-tester-amd64:1.0] 1992230} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user-amd64:1.0] 1450451} {[busybox:latest] 1162769} {[k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Sep 19 21:23:16.102: INFO:
Logging kubelet events for node k8s-node01
Sep 19 21:23:16.106: INFO:
Logging pods the kubelet thinks is on node k8s-node01
Sep 19 21:23:16.114: INFO: kube-flannel-ds-tllws started at 2018-08-17 09:12:53 +0000 UTC (1+1 container statuses recorded)
Sep 19 21:23:16.114: INFO:      Init container install-cni ready: true, restart count 1
Sep 19 21:23:16.114: INFO:      Container kube-flannel ready: true, restart count 1
Sep 19 21:23:16.114: INFO: nfs-server started at 2018-09-19 21:15:53 +0000 UTC (0+1 container statuses recorded)
Sep 19 21:23:16.114: INFO:      Container nfs-server ready: true, restart count 0
Sep 19 21:23:16.114: INFO: pod-subpath-test-nfs-7gfr started at 2018-09-19 21:15:57 +0000 UTC (1+2 container statuses recorded)
Sep 19 21:23:16.114: INFO:      Init container init-volume-nfs-7gfr ready: false, restart count 0
Sep 19 21:23:16.114: INFO:      Container test-container-subpath-nfs-7gfr ready: false, restart count 0
Sep 19 21:23:16.114: INFO:      Container test-container-volume-nfs-7gfr ready: false, restart count 0
Sep 19 21:23:16.115: INFO: kube-proxy-hxp7z started at 2018-07-31 23:08:51 +0000 UTC (0+1 container statuses recorded)
Sep 19 21:23:16.115: INFO:      Container kube-proxy ready: true, restart count 3
Sep 19 21:23:16.168: INFO:
Latency metrics for node k8s-node01
Sep 19 21:23:16.168: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m3.013891s}
Sep 19 21:23:16.169: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m3.013891s}
Sep 19 21:23:16.169: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m3.013891s}
Sep 19 21:23:16.169: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.9 Latency:2m3.001301s}
Sep 19 21:23:16.169: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.99 Latency:2m3.001301s}
Sep 19 21:23:16.169: INFO: {Operation:sync Method:pod_worker_latency_microseconds Quantile:0.5 Latency:2m3.000992s}
STEP: Dumping a list of prepulled images on each node...
Sep 19 21:23:16.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zrdp6" for this suite.
Sep 19 21:23:22.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 19 21:23:22.300: INFO: namespace: e2e-tests-subpath-zrdp6, resource: bindings, ignored listing per whitelist
Sep 19 21:23:22.359: INFO: namespace e2e-tests-subpath-zrdp6 deletion completed in 6.174410988s

• Failure [332.672 seconds]
[sig-storage] Subpath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Volume type: nfsPVC]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:148
    should support existing directories when readOnly specified in the volumeSource [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:307

    Expected error:
        <*errors.errorString | 0xc420da8d90>: {
            s: "expected pod \"pod-subpath-test-nfspvc-f47f\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-nfspvc-f47f\" to be \"success or failure\"",
        }
        expected pod "pod-subpath-test-nfspvc-f47f" success: Gave up after waiting 5m0s for pod "pod-subpath-test-nfspvc-f47f" to be "success or failure"
    not to have occurred

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2325
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath [Volume type: emptyDir]
  should support existing directories when readOnly specified in the volumeSource
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:307
[BeforeEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Sep 19 21:23:22.360: INFO: >>> kubeConfig: /home/ubuntu/admin.conf
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Volume type: emptyDir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:149
STEP: Initializing emptyDir volume
[It] should support existing directories when readOnly specified in the volumeSource
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:307
Sep 19 21:23:22.432: INFO: Volume type emptyDir doesn't support readOnly source
[AfterEach] [Volume type: emptyDir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:161
STEP: Deleting pod
Sep 19 21:23:22.432: INFO: Deleting pod "pod-subpath-test-emptydir-4w77" in namespace "e2e-tests-subpath-fzhbs"
STEP: Cleaning up volume
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Sep 19 21:23:22.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fzhbs" for this suite.
Sep 19 21:23:28.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 19 21:23:28.558: INFO: namespace: e2e-tests-subpath-fzhbs, resource: bindings, ignored listing per whitelist
Sep 19 21:23:28.573: INFO: namespace e2e-tests-subpath-fzhbs deletion completed in 6.135414899s
oomichi commented 5 years ago
2313 // testContainerOutputMatcher runs the given pod in the given namespace and waits
2314 // for all of the containers in the podSpec to move into the 'Success' status, and tests
2315 // the specified container log against the given expected output using the given matcher.
2316 func (f *Framework) testContainerOutputMatcher(scenarioName string,
2317         pod *v1.Pod,
2318         containerIndex int,
2319         expectedOutput []string,
2320         matcher func(string, ...interface{}) gomegatypes.GomegaMatcher) {
2321         By(fmt.Sprintf("Creating a pod to test %v", scenarioName))
                             ★ This message is printed, and the failure then happens inside this function
2322         if containerIndex < 0 || containerIndex >= len(pod.Spec.Containers) {
2323                 Failf("Invalid container index: %d", containerIndex)
2324         }
2325         ExpectNoError(f.MatchContainerOutput(pod, pod.Spec.Containers[containerIndex].Name, expectedOutput, matcher))
2326 }

The code above is called from the following:

 653 func initVolumeContent(f *framework.Framework, pod *v1.Pod, volumeFilepath, subpathFilepath string) {
 654         setWriteCommand(volumeFilepath, &pod.Spec.Containers[1])
 655         setReadCommand(subpathFilepath, &pod.Spec.Containers[0])
 656
 657         By(fmt.Sprintf("Creating pod to write volume content %s", pod.Name))
 658         f.TestContainerOutput("subpath", pod, 0, []string{
 659                 "content of file \"" + subpathFilepath + "\": mount-tester new file",
 660         })

The code above is in turn called from the failing test, so the chain is It() → initVolumeContent() → f.TestContainerOutput() → testContainerOutputMatcher() → MatchContainerOutput(), the last call being the util.go:2325 frame in the failure trace:

 307                         It("should support existing directories when readOnly specified in the volumeSource", func() {
 308                                 roVol := vol.getReadOnlyVolumeSpec()
 309                                 if roVol == nil {
 310                                         framework.Skipf("Volume type %v doesn't support readOnly source", curVolType)
 311                                 }
 312
 313                                 // Initialize content in the volume while it's writable
 314                                 initVolumeContent(f, pod, filePathInVolume, filePathInSubpath)

The log in question:

Sep 19 21:17:59.929: INFO: Waiting up to 5m0s for pod "pod-subpath-test-nfspvc-f47f" in namespace "e2e-tests-subpath-zrdp6" to be "success or failure"
Sep 19 21:17:59.945: INFO: Pod "pod-subpath-test-nfspvc-f47f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.530343ms
Sep 19 21:18:01.961: INFO: Pod "pod-subpath-test-nfspvc-f47f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031621855s
...
Sep 19 21:22:57.838: INFO: Pod "pod-subpath-test-nfspvc-f47f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.909506018s
Sep 19 21:22:59.851: INFO: Pod "pod-subpath-test-nfspvc-f47f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.922088849s
Sep 19 21:23:01.868: INFO: Failed to get logs from node "k8s-node01" pod "pod-subpath-test-nfspvc-f47f" container "test-container-subpath-nfspvc-f47f": the server rejected our request for an unknown reason (get pods pod-subpath-test-nfspvc-f47f)
Sep 19 21:23:01.877: INFO: Failed to get logs from node "k8s-node01" pod "pod-subpath-test-nfspvc-f47f" container "test-container-volume-nfspvc-f47f": the server rejected our request for an unknown reason (get pods pod-subpath-test-nfspvc-f47f)

The created Pod stayed Pending, so I checked the Pod's state while the test was running:

$ kubectl get pods -n e2e-tests-subpath-shrdw
NAME                           READY     STATUS     RESTARTS   AGE
nfs-server                     1/1       Running    0          22s
pod-subpath-test-nfspvc-4tf5   0/2       Init:0/1   0          14s
$
$ kubectl describe pod pod-subpath-test-nfspvc-4tf5 -n e2e-tests-subpath-shrdw
Name:               pod-subpath-test-nfspvc-4tf5
Namespace:          e2e-tests-subpath-shrdw
Priority:           0
PriorityClassName:  <none>
Node:               k8s-node01/192.168.1.109
Start Time:         Wed, 19 Sep 2018 22:04:30 +0000
Labels:             <none>
Annotations:        <none>
Status:             Pending
IP:
Init Containers:
  init-volume-nfspvc-4tf5:
    Container ID:
    Image:          busybox
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /probe-volume from liveness-probe-volume (rw)
      /test-volume from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7xrsn (ro)
Containers:
  test-container-subpath-nfspvc-4tf5:
    Container ID:
    Image:         gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Args:
      --file_content_in_loop=/test-volume/test-file
      --retry_time=10
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /probe-volume from liveness-probe-volume (rw)
      /test-volume from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7xrsn (ro)
  test-container-volume-nfspvc-4tf5:
    Container ID:
    Image:         gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Args:
      --new_file_0644=/test-volume/e2e-tests-subpath-shrdw/test-file
      --file_mode=/test-volume/e2e-tests-subpath-shrdw/test-file
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /probe-volume from liveness-probe-volume (rw)
      /test-volume from test-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7xrsn (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  test-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-c87lv
    ReadOnly:   false
  liveness-probe-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-7xrsn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7xrsn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age   From                 Message
  ----     ------       ----  ----                 -------
  Normal   Scheduled    32s   default-scheduler    Successfully assigned e2e-tests-subpath-shrdw/pod-subpath-test-nfspvc-4tf5 to k8s-node01
  Warning  FailedMount  31s   kubelet, k8s-node01  MountVolume.SetUp failed for volume "nfs-7kxbp" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp --scope -- mount -t nfs 10.244.1.119:/exports /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp
Output: Running scope as unit run-ra21a1a843da047d0ae3eb67273fd617b.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.119:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
  Warning  FailedMount  31s  kubelet, k8s-node01  MountVolume.SetUp failed for volume "nfs-7kxbp" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp --scope -- mount -t nfs 10.244.1.119:/exports /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp
Output: Running scope as unit run-rd477193a3ff64722b7b7b2d737f4edad.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.119:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
  Warning  FailedMount  30s  kubelet, k8s-node01  MountVolume.SetUp failed for volume "nfs-7kxbp" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp --scope -- mount -t nfs 10.244.1.119:/exports /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp
Output: Running scope as unit run-r576540b5b75f4439b8b73eeff64ec514.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.119:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
  Warning  FailedMount  28s  kubelet, k8s-node01  MountVolume.SetUp failed for volume "nfs-7kxbp" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp --scope -- mount -t nfs 10.244.1.119:/exports /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp
Output: Running scope as unit run-r0ccddff96f5f4f7b92d765f81be1f850.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.119:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
  Warning  FailedMount  24s  kubelet, k8s-node01  MountVolume.SetUp failed for volume "nfs-7kxbp" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp --scope -- mount -t nfs 10.244.1.119:/exports /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp
Output: Running scope as unit run-r1ef60482f9d543b1826e6bf69cee6a42.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.119:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
  Warning  FailedMount  16s  kubelet, k8s-node01  MountVolume.SetUp failed for volume "nfs-7kxbp" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp --scope -- mount -t nfs 10.244.1.119:/exports /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp
Output: Running scope as unit run-r4b9803d54e0b41e5a45ef1a74b05544d.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.119:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
$
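The kubelet's mount arguments in the events above make a manual repro straightforward. A minimal sketch, run directly on k8s-node01 (the mount point /mnt/nfs-test is hypothetical; the server IP and export path are copied from the events):

# Re-run the exact mount the kubelet attempts; without a mount.nfs helper
# this fails with exit status 32 and the same "wrong fs type" message.
$ sudo mkdir -p /mnt/nfs-test
$ sudo mount -t nfs 10.244.1.119:/exports /mnt/nfs-test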
oomichi commented 5 years ago

Could the Pod's Spec be mismatched with the environment? The Pod Spec is built here:

 363         return &v1.Pod{
 364                 ObjectMeta: metav1.ObjectMeta{
 365                         Name:      fmt.Sprintf("pod-subpath-test-%s", suffix),
                     ★ The Pod name matches the one seen in the test environment
 366                         Namespace: f.Namespace.Name,
 367                 },
 368                 Spec: v1.PodSpec{
 369                         InitContainers: []v1.Container{
 370                                 {
 371                                         Name:  fmt.Sprintf("init-volume-%s", suffix),
 372                                         Image: "busybox",
 373                                         VolumeMounts: []v1.VolumeMount{
 374                                                 {
 375                                                         Name:      volumeName,
 376                                                         MountPath: volumePath,
 377                                                 },
 378                                                 {
 379                                                         Name:      probeVolumeName,
 380                                                         MountPath: probeVolumePath,
 381                                                 },
 382                                         },
 383                                         SecurityContext: &v1.SecurityContext{
 384                                                 Privileged: &privileged,
 385                                         },
 386                                 },
 387                         },
 388                         Containers: []v1.Container{
 389                                 {
 390                                         Name:  fmt.Sprintf("test-container-subpath-%s", suffix),
 391                                         Image: mountImage,
 392                                         VolumeMounts: []v1.VolumeMount{
 393                                                 {
 394                                                         Name:      volumeName,
 395                                                         MountPath: volumePath,
 396                                                         SubPath:   subpath,
 397                                                 },
 398                                                 {
 399                                                         Name:      probeVolumeName,
 400                                                         MountPath: probeVolumePath,
 401                                                 },
 402                                         },
 403                                         SecurityContext: &v1.SecurityContext{
 404                                                 Privileged: &privileged,
 405                                         },
 406                                 },
 407                                 {
 408                                         Name:  fmt.Sprintf("test-container-volume-%s", suffix),
 409                                         Image: mountImage,
 410                                         VolumeMounts: []v1.VolumeMount{
 411                                                 {
 412                                                         Name:      volumeName,
 413                                                         MountPath: volumePath,
 414                                                 },
 415                                                 {
 416                                                         Name:      probeVolumeName,
 417                                                         MountPath: probeVolumePath,
 418                                                 },
 419                                         },
 420                                         SecurityContext: &v1.SecurityContext{
 421                                                 Privileged: &privileged,
 422                                         },
 423                                 },
 424                         },
 425                         RestartPolicy:                 v1.RestartPolicyNever,
 426                         TerminationGracePeriodSeconds: &gracePeriod,
 427                         Volumes: []v1.Volume{
 428                                 {
 429                                         Name:         volumeName,
 430                                         VolumeSource: *source,
 431                                 },
 432                                 {
 433                                         Name: probeVolumeName,
 434                                         VolumeSource: v1.VolumeSource{
 435                                                 EmptyDir: &v1.EmptyDirVolumeSource{},
 436                                         },
 437                                 },
 438                         },
 439                         SecurityContext: &v1.PodSecurityContext{
 440                                 SELinuxOptions: &v1.SELinuxOptions{
 441                                         Level: "s0:c0,c1",
 442                                 },
 443                         },
 444                 },
oomichi commented 5 years ago

It looks like the failure happens during volume mount setup for the InitContainer:

  Warning  FailedMount  31s   kubelet, k8s-node01  MountVolume.SetUp failed for volume "nfs-7kxbp" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp --scope -- mount -t nfs 10.244.1.119:/exports /var/lib/kubelet/pods/fbc26075-bc57-11e8-a146-fa163e420595/volumes/kubernetes.io~nfs/nfs-7kxbp
Output: Running scope as unit run-ra21a1a843da047d0ae3eb67273fd617b.scope.
mount: wrong fs type, bad option, bad superblock on 10.244.1.119:/exports,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
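This "wrong fs type, bad option, bad superblock" output is mount(8)'s generic complaint when no filesystem-specific helper can handle the request: for -t nfs it needs /sbin/mount.nfs, which Ubuntu ships in the nfs-common package. A quick check on the node (a sketch, assuming the Ubuntu hosts shown in the node info above):

$ which mount.nfs    # prints nothing if nfs-common is not installed
$ dpkg -s nfs-common # reports the package status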

Following https://askubuntu.com/questions/525243/why-do-i-get-wrong-fs-type-bad-option-bad-superblock-error, I installed nfs-common on k8s-master and k8s-node01, and the test now passes.
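For reference, the install step, a minimal sketch assuming the Ubuntu 16.04 hosts from the node info (run on every node that mounts NFS volumes):

$ sudo apt-get update
$ sudo apt-get install -y nfs-common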