First, the pod is running, but if I look inside it, I get the following:
```
rsyslogd: file '/home/vmtsyslog/rsyslog/log.txt': open error: Transport endpoint is not connected [v8.24.0-34.el7 try http://www.rsyslog.com/e/2433 ]
```
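The "Transport endpoint is not connected" error usually means the FUSE mount inside the pod has gone stale. A quick way to confirm this from outside the pod is something like the following (pod name and path taken from the logs in this report; the commands are illustrative):

```shell
# Listing the mounted directory fails with the same error when the mount is stale:
kubectl exec rsyslog-84c46b64b5-pt9ln -- ls /home/vmtsyslog/rsyslog

# The mount table still lists the glusterfs mount even though it no longer works:
kubectl exec rsyslog-84c46b64b5-pt9ln -- mount -t fuse.glusterfs
```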
I am seeing the following issue when a pod is restarted:
```
Warning FailedMount 80s kubelet, node1 MountVolume.SetUp failed for volume "pvc-6c5f93c7-6a69-11e9-afc2-005056b8205d" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/bc96d36e-6b58-11e9-afc2-005056b8205d/volumes/kubernetes.io~glusterfs/pvc-6c5f93c7-6a69-11e9-afc2-005056b8205d --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.10.169.130,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-6c5f93c7-6a69-11e9-afc2-005056b8205d/rsyslog-84c46b64b5-pt9ln-glusterfs.log,log-level=ERROR 10.10.169.130:vol_b6437db92ff5404ebb9009a363fd9310 /var/lib/kubelet/pods/bc96d36e-6b58-11e9-afc2-005056b8205d/volumes/kubernetes.io~glusterfs/pvc-6c5f93c7-6a69-11e9-afc2-005056b8205d
Output: Running scope as unit run-9343.scope.
Mount failed. Please check the log file for more details.
the following error information was pulled from the glusterfs log to help diagnose this issue:
[2019-04-30 15:00:48.361858] E [fuse-bridge.c:900:fuse_getattr_resume] 0-glusterfs-fuse: 2: GETATTR 1 (00000000-0000-0000-0000-000000000001) resolution failed
[2019-04-30 15:00:48.367460] E [fuse-bridge.c:900:fuse_getattr_resume] 0-glusterfs-fuse: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001) resolution failed
Warning FailedMount 79s kubelet, node1 MountVolume.SetUp failed for volume "pvc-6c5f93c7-6a69-11e9-afc2-005056b8205d" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
```
The pod stays in this state until I go into the gluster pod, stop and start the volume, and then restart the pod.
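For reference, the manual workaround boils down to something like this (the gluster pod name and namespace below are placeholders for my environment; the volume name is the one from the mount arguments above):

```shell
# Stop and start the gluster volume from inside the gluster server pod.
# --mode=script suppresses the interactive "are you sure" prompt on stop.
kubectl -n glusterfs exec glusterfs-server-pod -- \
  gluster --mode=script volume stop vol_b6437db92ff5404ebb9009a363fd9310
kubectl -n glusterfs exec glusterfs-server-pod -- \
  gluster volume start vol_b6437db92ff5404ebb9009a363fd9310

# Then delete the stuck pod so its Deployment recreates it:
kubectl delete pod rsyslog-84c46b64b5-pt9ln
```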
Looking at the PVs and PVCs, all are bound.
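For example, this is how I check the binding status (the PV name is the one from the events above):

```shell
# Both the PVC and its backing PV report a STATUS of Bound:
kubectl get pvc
kubectl get pv pvc-6c5f93c7-6a69-11e9-afc2-005056b8205d
```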
Please let me know if you need more information.