rexray / rexray

REX-Ray is a container storage orchestration engine enabling persistence for cloud native workloads
http://rexray.io
Apache License 2.0

docker: Error response from daemon: VolumeDriver.Mount: {"Error":"no device name returned"}. #469

Closed · akamalov closed this 8 years ago

akamalov commented 8 years ago

Greetings,

Problem: getting an error when mounting a volume into a container

OS

NAME="Red Hat Enterprise Linux Server"
VERSION="7.2 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.2 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.2:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.2"

REX-Ray

[root@node1 software]# rexray version
Binary: /usr/bin/rexray
SemVer: 0.3.3
OsArch: Linux-x86_64
Branch: v0.3.3
Commit: b30fb870b5b94cd8368824460ad020bfcf20be3a
Formed: Thu, 21 Apr 2016 15:41:36 EDT
[root@node1 software]# 

I can create and list volumes without a problem:

Create a volume:

[root@node4 ~]# docker volume create --driver=rexray --name=test --opt=size=1
test
[root@node4 ~]#
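
For reference (not part of the original report), the same volume can also be confirmed from the REX-Ray side with the command used later in this thread:

rexray volume get --volumename=test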

List volume:

[root@node4 ~]# docker volume ls
DRIVER              VOLUME NAME
local               3c937cf6cc3307672649335564663025beeaa2ecacbfb662f29ed7103e0fb05c
local               5b2070e7d031c046360b5f3fe866584584e3aa1da114c87cc0183b39aaca883e
local               697eaf10358700587947391a58db085d90942254ceb73309b62f4b334b037d09
local               fe2a808bf6e13d4b31c47cc6844d6d42110a6ebd1f99aa6dc9f5bc05ad4e730c
rexray              mongo005
rexray              scaleio-ds1
rexray              test
rexray              test001
[root@mslave4 ~]#

Mount volume:

[root@node4 ~]# docker run -ti --volume-driver=rexray -v test:/test busybox
docker: Error response from daemon: VolumeDriver.Mount: {"Error":"no device name returned"}.
[root@node4 ~]# 
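
For reference (not part of the original report), Docker's own record of the volume can be inspected as well; this only shows what the plugin reports for the volume and does not attach anything:

docker volume inspect test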
akamalov commented 8 years ago

Some debug output with logLevel set to debug:
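
For reference, a minimal /etc/rexray/config.yml sketch that would produce the settings visible in the startup log below (debug logging, ScaleIO storage driver); the userName and password values are placeholders, not taken from this report:

rexray:
  logLevel: debug
  storageDrivers:
  - scaleio
scaleio:
  endpoint: https://192.168.120.166/api
  insecure: true
  useCerts: false
  apiVersion: "2"
  userName: admin        # placeholder
  password: changeme     # placeholder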



Binary: /usr/bin/rexray
SemVer: 0.3.3
OsArch: Linux-x86_64
Branch: v0.3.3
Commit: b30fb870b5b94cd8368824460ad020bfcf20be3a
Formed: Thu, 21 Apr 2016 15:41:36 EDT

time="2016-06-22T06:46:42-04:00" level=info msg="created pid file, pid=505" 
time="2016-06-22T06:46:42-04:00" level=debug msg="initializing configuration" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue="tcp://:7979" envVar="REXRAY_HOST" flagName=host keyName=rexray.host keyType=0 usage="The REX-Ray host" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=warn envVar="REXRAY_LOGLEVEL" flagName=logLevel keyName=rexray.logLevel keyType=0 usage="The log level (error, warn, info, debug)" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=linux envVar="REXRAY_OSDRIVERS" flagName=osDrivers keyName=rexray.osDrivers keyType=0 usage="The OS drivers to consider" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="REXRAY_STORAGEDRIVERS" flagName=storageDrivers keyName=rexray.storageDrivers keyType=0 usage="The storage drivers to consider" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=docker envVar="REXRAY_VOLUMEDRIVERS" flagName=volumeDrivers keyName=rexray.volumeDrivers keyType=0 usage="The volume drivers to consider" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=448 envVar="LINUX_VOLUME_FILEMODE" flagName=linuxVolumeFilemode keyName=linux.volume.filemode keyType=1 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue="/data" envVar="LINUX_VOLUME_ROOTPATH" flagName=linuxVolumeRootpath keyName=linux.volume.rootpath keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="AWS_ACCESSKEY" flagName=awsAccessKey keyName=aws.accessKey keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="AWS_SECRETKEY" flagName=awsSecretKey keyName=aws.secretKey keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="AWS_REGION" flagName=awsRegion keyName=aws.region keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="AWS_REXRAYTAG" flagName=awsRexrayTag keyName=aws.rexrayTag keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="GCE_KEYFILE" flagName=gceKeyfile keyName=gce.keyfile keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_ENDPOINT" flagName=isilonEndpoint keyName=isilon.endpoint keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="ISILON_INSECURE" flagName=isilonInsecure keyName=isilon.insecure keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_USERNAME" flagName=isilonUserName keyName=isilon.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_GROUP" flagName=isilonGroup keyName=isilon.group keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_PASSWORD" flagName=isilonPassword keyName=isilon.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_VOLUMEPATH" flagName=isilonVolumePath keyName=isilon.volumePath keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_NFSHOST" flagName=isilonNfsHost keyName=isilon.nfsHost keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_DATASUBNET" flagName=isilonDataSubnet keyName=isilon.dataSubnet keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="ISILON_QUOTAS" flagName=isilonQuotas keyName=isilon.quotas keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_AUTHURL" flagName=openstackAuthURL keyName=openstack.authURL keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_USERID" flagName=openstackUserID keyName=openstack.userID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_USERNAME" flagName=openstackUserName keyName=openstack.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_PASSWORD" flagName=openstackPassword keyName=openstack.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_TENANTID" flagName=openstackTenantID keyName=openstack.tenantID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_TENANTNAME" flagName=openstackTenantName keyName=openstack.tenantName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_DOMAINID" flagName=openstackDomainID keyName=openstack.domainID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_DOMAINNAME" flagName=openstackDomainName keyName=openstack.domainName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_REGIONNAME" flagName=openstackRegionName keyName=openstack.regionName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_AVAILABILITYZONENAME" flagName=openstackAvailabilityZoneName keyName=openstack.availabilityZoneName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_AUTHURL" flagName=rackspaceAuthURL keyName=rackspace.authURL keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_USERID" flagName=rackspaceUserID keyName=rackspace.userID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_USERNAME" flagName=rackspaceUserName keyName=rackspace.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_PASSWORD" flagName=rackspacePassword keyName=rackspace.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_TENANTID" flagName=rackspaceTenantID keyName=rackspace.tenantID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_TENANTNAME" flagName=rackspaceTenantName keyName=rackspace.tenantName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_DOMAINID" flagName=rackspaceDomainID keyName=rackspace.domainID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_DOMAINNAME" flagName=rackspaceDomainName keyName=rackspace.domainName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_ENDPOINT" flagName=scaleioEndpoint keyName=scaleio.endpoint keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="SCALEIO_INSECURE" flagName=scaleioInsecure keyName=scaleio.insecure keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="SCALEIO_USECERTS" flagName=scaleioUseCerts keyName=scaleio.useCerts keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_APIVERSION" flagName=scaleioApiVersion keyName=scaleio.apiVersion keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_USERID" flagName=scaleioUserID keyName=scaleio.userID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_USERNAME" flagName=scaleioUserName keyName=scaleio.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_PASSWORD" flagName=scaleioPassword keyName=scaleio.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_SYSTEMID" flagName=scaleioSystemID keyName=scaleio.systemID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_SYSTEMNAME" flagName=scaleioSystemName keyName=scaleio.systemName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_PROTECTIONDOMAINID" flagName=scaleioProtectionDomainID keyName=scaleio.protectionDomainID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_PROTECTIONDOMAINNAME" flagName=scaleioProtectionDomainName keyName=scaleio.protectionDomainName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_STORAGEPOOLID" flagName=scaleioStoragePoolID keyName=scaleio.storagePoolID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_STORAGEPOOLNAME" flagName=scaleioStoragePoolName keyName=scaleio.storagePoolName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_THINORTHICK" flagName=scaleioThinOrThick keyName=scaleio.thinOrThick keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_ENDPOINT" flagName=virtualboxEndpoint keyName=virtualbox.endpoint keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_VOLUMEPATH" flagName=virtualboxVolumePath keyName=virtualbox.volumePath keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_LOCALMACHINENAMEORID" flagName=virtualboxLocalMachineNameOrId keyName=virtualbox.localMachineNameOrId keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_USERNAME" flagName=virtualboxUsername keyName=virtualbox.username keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_PASSWORD" flagName=virtualboxPassword keyName=virtualbox.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="VIRTUALBOX_TLS" flagName=virtualboxTls keyName=virtualbox.tls keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_CONTROLLERNAME" flagName=virtualboxControllerName keyName=virtualbox.controllerName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_SMISHOST" flagName=vmaxSmishost keyName=vmax.smishost keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_SMISPORT" flagName=vmaxSmisport keyName=vmax.smisport keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="VMAX_INSECURE" flagName=vmaxInsecure keyName=vmax.insecure keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_USERNAME" flagName=vmaxUserName keyName=vmax.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_PASSWORD" flagName=vmaxPassword keyName=vmax.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_SID" flagName=vmaxSid keyName=vmax.sid keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_VOLUMEPREFIX" flagName=vmaxVolumePrefix keyName=vmax.volumePrefix keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_STORAGEGROUP" flagName=vmaxStorageGroup keyName=vmax.storageGroup keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="VMAX_VMH_INSECURE" flagName=vmaxVmhInsecure keyName=vmax.vmh.insecure keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_VMH_USERNAME" flagName=vmaxVmhUserName keyName=vmax.vmh.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_VMH_PASSWORD" flagName=vmaxVmhPassword keyName=vmax.vmh.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_VMH_HOST" flagName=vmaxVmhHost keyName=vmax.vmh.host keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="XTREMIO_ENDPOINT" flagName=xtremioEndpoint keyName=xtremio.endpoint keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="XTREMIO_INSECURE" flagName=xtremioInsecure keyName=xtremio.insecure keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="XTREMIO_USERNAME" flagName=xtremioUserName keyName=xtremio.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="XTREMIO_PASSWORD" flagName=xtremioPassword keyName=xtremio.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="XTREMIO_DEVICEMAPPER" flagName=xtremioDeviceMapper keyName=xtremio.deviceMapper keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="XTREMIO_MULTIPATH" flagName=xtremioMultipath keyName=xtremio.multipath keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="XTREMIO_REMOTEMANAGEMENT" flagName=xtremioRemoteManagement keyName=xtremio.remoteManagement keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="DOCKER_FSTYPE" flagName=dockerFsType keyName=docker.fsType keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="DOCKER_VOLUMETYPE" flagName=dockerVolumeType keyName=docker.volumeType keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="DOCKER_IOPS" flagName=dockerIops keyName=docker.iops keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="DOCKER_SIZE" flagName=dockerSize keyName=docker.size keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="DOCKER_AVAILABILITYZONE" flagName=dockerAvailabilityZone keyName=docker.availabilityZone keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue="/data" envVar="LINUX_VOLUME_ROOTPATH" flagName=linuxVolumeRootpath keyName=linux.volume.rootpath keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="REXRAY_VOLUME_MOUNT_PREEMPT" flagName=preempt keyName=rexray.volume.mount.preempt keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="loading global config file" path="/etc/rexray/config.yml" 
time="2016-06-22T06:46:42-04:00" level=debug msg="got modules map" count=2 
time="2016-06-22T06:46:42-04:00" level=debug msg="processing module config" name=default-admin 
time="2016-06-22T06:46:42-04:00" level=debug msg="getting scoped config for module" scope=rexray.modules.default-admin 
time="2016-06-22T06:46:42-04:00" level=info msg="created new mod config" addr="tcp://127.0.0.1:7979" desc="The default admin module." name=default-admin type=admin 
time="2016-06-22T06:46:42-04:00" level=debug msg="processing module config" name=default-docker 
time="2016-06-22T06:46:42-04:00" level=debug msg="getting scoped config for module" scope=rexray.modules.default-docker 
time="2016-06-22T06:46:42-04:00" level=info msg="created new mod config" addr="unix:///run/docker/plugins/rexray.sock" desc="The default docker module." name=default-docker type=docker 
time="2016-06-22T06:46:42-04:00" level=info msg="initialized module instance" address="tcp://127.0.0.1:7979" name=default-admin typeName=admin 
time="2016-06-22T06:46:42-04:00" level=debug msg="initializing configuration" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue="tcp://:7979" envVar="REXRAY_HOST" flagName=host keyName=rexray.host keyType=0 usage="The REX-Ray host" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=warn envVar="REXRAY_LOGLEVEL" flagName=logLevel keyName=rexray.logLevel keyType=0 usage="The log level (error, warn, info, debug)" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=linux envVar="REXRAY_OSDRIVERS" flagName=osDrivers keyName=rexray.osDrivers keyType=0 usage="The OS drivers to consider" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="REXRAY_STORAGEDRIVERS" flagName=storageDrivers keyName=rexray.storageDrivers keyType=0 usage="The storage drivers to consider" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=docker envVar="REXRAY_VOLUMEDRIVERS" flagName=volumeDrivers keyName=rexray.volumeDrivers keyType=0 usage="The volume drivers to consider" 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=448 envVar="LINUX_VOLUME_FILEMODE" flagName=linuxVolumeFilemode keyName=linux.volume.filemode keyType=1 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue="/data" envVar="LINUX_VOLUME_ROOTPATH" flagName=linuxVolumeRootpath keyName=linux.volume.rootpath keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="AWS_ACCESSKEY" flagName=awsAccessKey keyName=aws.accessKey keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="AWS_SECRETKEY" flagName=awsSecretKey keyName=aws.secretKey keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="AWS_REGION" flagName=awsRegion keyName=aws.region keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="AWS_REXRAYTAG" flagName=awsRexrayTag keyName=aws.rexrayTag keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="GCE_KEYFILE" flagName=gceKeyfile keyName=gce.keyfile keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_ENDPOINT" flagName=isilonEndpoint keyName=isilon.endpoint keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="ISILON_INSECURE" flagName=isilonInsecure keyName=isilon.insecure keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_USERNAME" flagName=isilonUserName keyName=isilon.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_GROUP" flagName=isilonGroup keyName=isilon.group keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_PASSWORD" flagName=isilonPassword keyName=isilon.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_VOLUMEPATH" flagName=isilonVolumePath keyName=isilon.volumePath keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_NFSHOST" flagName=isilonNfsHost keyName=isilon.nfsHost keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="ISILON_DATASUBNET" flagName=isilonDataSubnet keyName=isilon.dataSubnet keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="ISILON_QUOTAS" flagName=isilonQuotas keyName=isilon.quotas keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_AUTHURL" flagName=openstackAuthURL keyName=openstack.authURL keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_USERID" flagName=openstackUserID keyName=openstack.userID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_USERNAME" flagName=openstackUserName keyName=openstack.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_PASSWORD" flagName=openstackPassword keyName=openstack.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_TENANTID" flagName=openstackTenantID keyName=openstack.tenantID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_TENANTNAME" flagName=openstackTenantName keyName=openstack.tenantName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_DOMAINID" flagName=openstackDomainID keyName=openstack.domainID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_DOMAINNAME" flagName=openstackDomainName keyName=openstack.domainName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_REGIONNAME" flagName=openstackRegionName keyName=openstack.regionName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="OPENSTACK_AVAILABILITYZONENAME" flagName=openstackAvailabilityZoneName keyName=openstack.availabilityZoneName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_AUTHURL" flagName=rackspaceAuthURL keyName=rackspace.authURL keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_USERID" flagName=rackspaceUserID keyName=rackspace.userID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_USERNAME" flagName=rackspaceUserName keyName=rackspace.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_PASSWORD" flagName=rackspacePassword keyName=rackspace.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_TENANTID" flagName=rackspaceTenantID keyName=rackspace.tenantID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_TENANTNAME" flagName=rackspaceTenantName keyName=rackspace.tenantName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_DOMAINID" flagName=rackspaceDomainID keyName=rackspace.domainID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="RACKSPACE_DOMAINNAME" flagName=rackspaceDomainName keyName=rackspace.domainName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_ENDPOINT" flagName=scaleioEndpoint keyName=scaleio.endpoint keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="SCALEIO_INSECURE" flagName=scaleioInsecure keyName=scaleio.insecure keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="SCALEIO_USECERTS" flagName=scaleioUseCerts keyName=scaleio.useCerts keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_APIVERSION" flagName=scaleioApiVersion keyName=scaleio.apiVersion keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_USERID" flagName=scaleioUserID keyName=scaleio.userID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_USERNAME" flagName=scaleioUserName keyName=scaleio.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_PASSWORD" flagName=scaleioPassword keyName=scaleio.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_SYSTEMID" flagName=scaleioSystemID keyName=scaleio.systemID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_SYSTEMNAME" flagName=scaleioSystemName keyName=scaleio.systemName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_PROTECTIONDOMAINID" flagName=scaleioProtectionDomainID keyName=scaleio.protectionDomainID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_PROTECTIONDOMAINNAME" flagName=scaleioProtectionDomainName keyName=scaleio.protectionDomainName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_STORAGEPOOLID" flagName=scaleioStoragePoolID keyName=scaleio.storagePoolID keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_STORAGEPOOLNAME" flagName=scaleioStoragePoolName keyName=scaleio.storagePoolName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="SCALEIO_THINORTHICK" flagName=scaleioThinOrThick keyName=scaleio.thinOrThick keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_ENDPOINT" flagName=virtualboxEndpoint keyName=virtualbox.endpoint keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_VOLUMEPATH" flagName=virtualboxVolumePath keyName=virtualbox.volumePath keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_LOCALMACHINENAMEORID" flagName=virtualboxLocalMachineNameOrId keyName=virtualbox.localMachineNameOrId keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_USERNAME" flagName=virtualboxUsername keyName=virtualbox.username keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_PASSWORD" flagName=virtualboxPassword keyName=virtualbox.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="VIRTUALBOX_TLS" flagName=virtualboxTls keyName=virtualbox.tls keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VIRTUALBOX_CONTROLLERNAME" flagName=virtualboxControllerName keyName=virtualbox.controllerName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_SMISHOST" flagName=vmaxSmishost keyName=vmax.smishost keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_SMISPORT" flagName=vmaxSmisport keyName=vmax.smisport keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="VMAX_INSECURE" flagName=vmaxInsecure keyName=vmax.insecure keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_USERNAME" flagName=vmaxUserName keyName=vmax.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_PASSWORD" flagName=vmaxPassword keyName=vmax.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_SID" flagName=vmaxSid keyName=vmax.sid keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_VOLUMEPREFIX" flagName=vmaxVolumePrefix keyName=vmax.volumePrefix keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_STORAGEGROUP" flagName=vmaxStorageGroup keyName=vmax.storageGroup keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="VMAX_VMH_INSECURE" flagName=vmaxVmhInsecure keyName=vmax.vmh.insecure keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_VMH_USERNAME" flagName=vmaxVmhUserName keyName=vmax.vmh.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_VMH_PASSWORD" flagName=vmaxVmhPassword keyName=vmax.vmh.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="VMAX_VMH_HOST" flagName=vmaxVmhHost keyName=vmax.vmh.host keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="XTREMIO_ENDPOINT" flagName=xtremioEndpoint keyName=xtremio.endpoint keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="XTREMIO_INSECURE" flagName=xtremioInsecure keyName=xtremio.insecure keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="XTREMIO_USERNAME" flagName=xtremioUserName keyName=xtremio.userName keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="XTREMIO_PASSWORD" flagName=xtremioPassword keyName=xtremio.password keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="XTREMIO_DEVICEMAPPER" flagName=xtremioDeviceMapper keyName=xtremio.deviceMapper keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="XTREMIO_MULTIPATH" flagName=xtremioMultipath keyName=xtremio.multipath keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="XTREMIO_REMOTEMANAGEMENT" flagName=xtremioRemoteManagement keyName=xtremio.remoteManagement keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="DOCKER_FSTYPE" flagName=dockerFsType keyName=docker.fsType keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="DOCKER_VOLUMETYPE" flagName=dockerVolumeType keyName=docker.volumeType keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="DOCKER_IOPS" flagName=dockerIops keyName=docker.iops keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="DOCKER_SIZE" flagName=dockerSize keyName=docker.size keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue= envVar="DOCKER_AVAILABILITYZONE" flagName=dockerAvailabilityZone keyName=docker.availabilityZone keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue="/data" envVar="LINUX_VOLUME_ROOTPATH" flagName=linuxVolumeRootpath keyName=linux.volume.rootpath keyType=0 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="adding flag" defaultValue=false envVar="REXRAY_VOLUME_MOUNT_PREEMPT" flagName=preempt keyName=rexray.volume.mount.preempt keyType=2 usage= 
time="2016-06-22T06:46:42-04:00" level=debug msg="loading global config file" path="/etc/rexray/config.yml" 
time="2016-06-22T06:46:42-04:00" level=debug msg="constructed driver" driverName=Isilon 
time="2016-06-22T06:46:42-04:00" level=debug msg="constructed driver" driverName=Rackspace 
time="2016-06-22T06:46:42-04:00" level=debug msg="constructed driver" driverName=ScaleIO 
time="2016-06-22T06:46:42-04:00" level=debug msg="constructed driver" driverName=XtremIO 
time="2016-06-22T06:46:42-04:00" level=debug msg="constructed driver" driverName=docker 
time="2016-06-22T06:46:42-04:00" level=debug msg="constructed driver" driverName=linux 
time="2016-06-22T06:46:42-04:00" level=debug msg="constructed driver" driverName=ec2 
time="2016-06-22T06:46:42-04:00" level=debug msg="constructed driver" driverName=gce 
time="2016-06-22T06:46:42-04:00" level=debug msg="constructed driver" driverName=Openstack 
time="2016-06-22T06:46:42-04:00" level=debug msg="constructed driver" driverName=virtualbox 
time="2016-06-22T06:46:42-04:00" level=debug msg="constructed driver" driverName=VMAX 
time="2016-06-22T06:46:42-04:00" level=info msg="initialized module instance" address="unix:///run/docker/plugins/rexray.sock" name=default-docker typeName=docker 
time="2016-06-22T06:46:42-04:00" level=info msg="started module" address="tcp://127.0.0.1:7979" name=default-admin typeName=admin 
time="2016-06-22T06:46:42-04:00" level=info msg="[linux]" 
time="2016-06-22T06:46:42-04:00" level=info msg="[docker]" 
time="2016-06-22T06:46:42-04:00" level=info msg="[scaleio]" 
time="2016-06-22T06:46:42-04:00" level=debug msg="core get drivers" osDrivers=[linux] storageDrivers=[scaleio] volumeDrivers=[docker] 
time="2016-06-22T06:46:42-04:00" level=info msg="storage driver initialized" apiVersion=2 endpoint="https://192.168.120.166/api" insecure=true moduleName=default-docker provider=ScaleIO useCerts=false 
time="2016-06-22T06:46:42-04:00" level=info msg="docker volume driver initialized" availabilityZone= iops= moduleName=default-docker provider=docker size= volumeRootPath="/data" volumeType= 
time="2016-06-22T06:46:42-04:00" level=info msg="os driver initialized" moduleName=default-docker provider=linux 
time="2016-06-22T06:46:42-04:00" level=debug msg="checking volume path cache setting" pathCache=true 
time="2016-06-22T06:46:42-04:00" level=info msg=vdm.List driverName=docker moduleName=default-docker 
time="2016-06-22T06:46:42-04:00" level=info msg="listing volumes" driverName=docker moduleName=default-docker 
time="2016-06-22T06:46:42-04:00" level=debug msg="got instance" instance=&{ScaleIO d50382690000000b  } moduleName=default-docker provider=ScaleIO 
time="2016-06-22T06:46:42-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO moduleName=default-docker volumeID= volumeName= 
time="2016-06-22T06:46:42-04:00" level=info msg=odm.GetMounts deviceName= driverName=linux moduleName=default-docker mountPoint= 
time="2016-06-22T06:46:42-04:00" level=debug msg="docker voldriver spec file" path="/etc/docker/plugins/rexray.spec" 
time="2016-06-22T06:46:42-04:00" level=info msg="started module" address="unix:///run/docker/plugins/rexray.sock" name=default-docker typeName=docker 
time="2016-06-22T06:46:42-04:00" level=info msg="service sent registered modules start signals" 
time="2016-06-22T06:46:42-04:00" level=info msg="service successfully initialized, waiting on stop signal" 
time="2016-06-22T06:46:57-04:00" level=info msg=vdm.Path driverName=docker moduleName=default-docker volumeID= volumeName=test 
time="2016-06-22T06:46:57-04:00" level=debug msg="skipping path lookup" driverName=docker moduleName=default-docker volumeID= volumeName=test 
time="2016-06-22T06:46:57-04:00" level=info msg=vdm.Mount driverName=docker moduleName=default-docker newFsType= overwriteFs=false preempt=false volumeID= volumeName=test 
time="2016-06-22T06:46:57-04:00" level=info msg="mounting volume" driverName=docker moduleName=default-docker newFsType= overwriteFs=false volumeID= volumeName=test 
time="2016-06-22T06:46:57-04:00" level=debug msg="got instance" instance=&{ScaleIO d50382690000000b  } moduleName=default-docker provider=ScaleIO 
time="2016-06-22T06:46:57-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO moduleName=default-docker volumeID= volumeName=test 
time="2016-06-22T06:46:57-04:00" level=info msg=sdm.GetVolumeAttach driverName=ScaleIO instanceID=d50382690000000b moduleName=default-docker volumeID=c047cb2900000003 
time="2016-06-22T06:46:57-04:00" level=error msg="/VolumeDriver.Mount: error mounting volume" error="no device name returned" 
time="2016-06-22T06:46:57-04:00" level=error msg="no device name returned" 
time="2016-06-22T06:46:57-04:00" level=info msg=vdm.Unmount driverName=docker moduleName=default-docker volumeID= volumeName=test 
time="2016-06-22T06:46:57-04:00" level=info msg="initialized count" count=0 moduleName=default-docker volumeName=test 
time="2016-06-22T06:46:57-04:00" level=info msg="unmounting volume" driverName=docker moduleName=default-docker volumeID= volumeName=test 
time="2016-06-22T06:46:57-04:00" level=debug msg="got instance" instance=&{ScaleIO d50382690000000b  } moduleName=default-docker provider=ScaleIO 
time="2016-06-22T06:46:57-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO moduleName=default-docker volumeID= volumeName=test 
time="2016-06-22T06:46:58-04:00" level=info msg=sdm.GetVolumeAttach driverName=ScaleIO instanceID=d50382690000000b moduleName=default-docker volumeID=c047cb2900000003 
time="2016-06-22T06:46:58-04:00" level=info msg=odm.GetMounts deviceName= driverName=linux moduleName=default-docker mountPoint= 
time="2016-06-22T06:46:58-04:00" level=info msg=odm.Unmount driverName=linux moduleName=default-docker mountPoint="/sys" 
time="2016-06-22T06:46:59-04:00" level=error msg="/VolumeDriver.Unmount: error unmounting volume" error="device or resource busy" 
time="2016-06-22T06:46:59-04:00" level=error msg="device or resource busy" 
kacole2 commented 8 years ago

@akamalov What Docker version? Assuming 1.11? Is this only a problem with the test volume?

akamalov commented 8 years ago

Hey Kenny, yep, the Docker version is 1.11.2.

clintkitson commented 8 years ago

@akamalov Could you post the output of rexray volume map, and of rexray volume get for the specific volume you are trying to mount? Would also be curious to see what scli reports as to where this volume is mapped.

Is this happening for all volumes from this SIO cluster? Is it happening for all SDCs?

akamalov commented 8 years ago

Q: Is this happening for all volumes from this SIO cluster? Is it happening for all SDCs?

A: No. ScaleIO is serving datastores for ESXi nodes, and so far nothing is affecting them.

Q: Could you return a rexray volume map

A: Here it is...

[root@mnode4 rexray]# rexray volume map
INFO[0000] [linux]                                      
INFO[0000] [docker]                                     
INFO[0000] [scaleio]                                    
INFO[0000] storage driver initialized                    apiVersion=2 endpoint=https://192.168.120.166/api insecure=true moduleName= provider=ScaleIO useCerts=false
INFO[0000] docker volume driver initialized              availabilityZone= iops= moduleName= provider=docker size= volumeRootPath=/data volumeType=
INFO[0000] os driver initialized                         moduleName= provider=linux
- providername: ScaleIO
  instanceid: d50382690000000b
  volumeid: c047cb2900000003
  devicename: /dev/scinia
  region: 076534e45e5ed7ca
  status: ""
  networkname: ""

[root@mnode4 rexray]# 

Q: ...rexray volume get for the specific volume you are trying to mount

A: Here it is...

[root@mnode4 rexray]# rexray volume get --volumename="test"
INFO[0000] [linux]                                      
INFO[0000] [docker]                                     
INFO[0000] [scaleio]                                    
INFO[0000] docker volume driver initialized              availabilityZone= iops= moduleName= provider=docker size= volumeRootPath=/data volumeType=
INFO[0000] storage driver initialized                    apiVersion=2 endpoint=https://192.168.120.166/api insecure=true moduleName= provider=ScaleIO useCerts=false
INFO[0000] os driver initialized                         moduleName= provider=linux
INFO[0000] sdm.GetVolume                                 driverName=ScaleIO moduleName= volumeID= volumeName=test
- name: test
  volumeid: c047cb2900000003
  availabilityzone: domain1
  status: ""
  volumetype: pool1
  iops: 0
  size: "16"
  networkname: ""
  attachments:
  - volumeid: c047cb2900000003
    instanceid: d503826c0000000e
    devicename: ""
    status: ""

[root@mnode4 rexray]# 

Query the test volume using scli:

ScaleIO-10-60-120-150:~ # scli --mdm_ip 192.168.120.150 --query_volume_tree --volume_name test
VTree ID: d47f8cdd00000003
        Total snapshots size: 0 Bytes
        Total capacity in use: 0 Bytes

>> Volume ID: c047cb2900000003 Name: test Thin-provisioned Size: 16.0 GB (16384 MB)
   Capacity in use: 0 Bytes
   Trimmed capacity: 0 Bytes
   Storage Pool 907f19e300000000 Name: pool1
   Protection Domain 33399d2500000000 Name: domain1
   Creation time: 2016-06-21 16:25:55
   Uses RAM Read Cache
   Read bandwidth:  0 IOPS 0 Bytes per-second
   Write bandwidth: 0 IOPS 0 Bytes per-second
   Mapped SDC:
      SDC ID: d503826c0000000e IP: 192.168.120.165 Name: N/A

ScaleIO-10-60-120-150:~ # 

Trying to spin up a Docker container and map the volume. node4 is 192.168.120.164:

[root@mnode4 rexray]# docker run -ti --volume-driver=rexray -v test:/test busybox
docker: Error response from daemon: VolumeDriver.Mount: {"Error":"read /dev/scinia: input/output error"}.
[root@mnode4 rexray]# 
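
For reference (not from the thread), the I/O error on /dev/scinia could be cross-checked directly on the host, for example:

ls -l /dev/scinia      # does the SDC block device exist?
blkid /dev/scinia      # can the device be read at all?
dmesg | tail -n 20     # any scini/SCSI errors reported by the kernel?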

Trying to spin up a Docker container on another host, node5, which is 192.168.120.165:

[root@node5 ~]# docker run -ti --volume-driver=rexray -v test:/test busybox
docker: Error response from daemon: VolumeDriver.Mount: {"Error":"error waiting on volume to mount"}.
[root@node5 ~]# 

Display systemd status of REX-Ray:

[root@node5 ~]# systemctl -l status rexray
● rexray.service - rexray
   Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2016-06-20 10:27:49 EDT; 3 days ago
 Main PID: 7675 (rexray)
   Memory: 24.0M
   CGroup: /system.slice/rexray.service
           └─7675 /usr/bin/rexray start -f

Jun 23 11:49:19 node5 rexray[7675]: time="2016-06-23T11:49:19-04:00" level=error msg="error waiting on volume to mount" inner.provider=ScaleIO instanceId=d503826c0000000e moduleName=default-docker provider=ScaleIO runAsync=false volumeId=c047cb2900000003
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg=vdm.Unmount driverName=docker moduleName=default-docker volumeID= volumeName=test
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg="initialized count" count=0 moduleName=default-docker volumeName=test
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg="unmounting volume" driverName=docker moduleName=default-docker volumeID= volumeName=test
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO moduleName=default-docker volumeID= volumeName=test
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg=sdm.GetVolumeAttach driverName=ScaleIO instanceID=d503826c0000000e moduleName=default-docker volumeID=c047cb2900000003
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg=odm.GetMounts deviceName= driverName=linux moduleName=default-docker mountPoint=
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg=odm.Unmount driverName=linux moduleName=default-docker mountPoint="/sys"
Jun 23 11:49:21 node5 rexray[7675]: time="2016-06-23T11:49:21-04:00" level=error msg="/VolumeDriver.Unmount: error unmounting volume" error="device or resource busy"
Jun 23 11:49:21 node5 rexray[7675]: time="2016-06-23T11:49:21-04:00" level=error msg="device or resource busy"
[root@node5 ~]#

Now, trying to map and unmap the volume using scli on node4 (192.168.120.164) and node5 (192.168.120.165):

ScaleIO-192-168-120-150:~ # scli --mdm_ip 192.168.120.150 --map_volume_to_sdc --volume_name test --sdc_ip  192.168.120.165
Successfully mapped volume test to SDC 192.168.120.165

ScaleIO-192-168-120-150:~ # scli --mdm_ip 192.168.120.150 --volume_name test --unmap_volume_from_sdc --sdc_ip 192.168.120.165 --i_am_sure
Successfully un-mapped volume test from SDC 192.168.120.165

ScaleIO-192-168-120-150:~ # scli --mdm_ip 192.168.120.150 --map_volume_to_sdc --volume_name test --sdc_ip  192.168.120.164
Successfully mapped volume test to SDC 192.168.120.164

ScaleIO-192-168-120-150:~ # scli --mdm_ip 192.168.120.150 --volume_name test --unmap_volume_from_sdc --sdc_ip 192.168.120.164 --i_am_sure
Successfully un-mapped volume test from SDC 192.168.120.164
ScaleIO-192-168-120-150:~ # 

As you can see, there is no problem mapping and unmapping using scli.

Just FYI, here is the journalctl output for REX-Ray, which shows the Jun 23 11:49:21 message level=error msg="device or resource busy":

[root@node5 ~]# journalctl -u rexray -lr
-- Logs begin at Tue 2016-06-21 23:36:53 EDT, end at Thu 2016-06-23 11:51:02 EDT. --
Jun 23 11:49:21 node5 rexray[7675]: time="2016-06-23T11:49:21-04:00" level=error msg="device or resource busy"
Jun 23 11:49:21 node5 rexray[7675]: time="2016-06-23T11:49:21-04:00" level=error msg="/VolumeDriver.Unmount: error unmount
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg=odm.Unmount driverName=linux moduleNam
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg=odm.GetMounts deviceName= driverName=l
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg=sdm.GetVolumeAttach driverName=ScaleIO
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO modul
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg="unmounting volume" driverName=docker 
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg="initialized count" count=0 moduleName
Jun 23 11:49:20 node5 rexray[7675]: time="2016-06-23T11:49:20-04:00" level=info msg=vdm.Unmount driverName=docker moduleNa
Jun 23 11:49:19 node5 rexray[7675]: time="2016-06-23T11:49:19-04:00" level=error msg="error waiting on volume to mount" in
Jun 23 11:49:19 node5 rexray[7675]: time="2016-06-23T11:49:19-04:00" level=error msg="/VolumeDriver.Mount: error mounting 
Jun 23 11:49:09 node5 rexray[7675]: time="2016-06-23T11:49:09-04:00" level=info msg=sdm.AttachVolume driverName=ScaleIO fo
Jun 23 11:49:09 node5 rexray[7675]: time="2016-06-23T11:49:09-04:00" level=info msg=odm.Unmount driverName=linux moduleNam
Jun 23 11:49:09 node5 rexray[7675]: time="2016-06-23T11:49:09-04:00" level=info msg=sdm.GetVolumeAttach driverName=ScaleIO
Jun 23 11:49:09 node5 rexray[7675]: time="2016-06-23T11:49:09-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO modul
Jun 23 11:49:09 node5 rexray[7675]: time="2016-06-23T11:49:09-04:00" level=info msg="mounting volume" driverName=docker mo
Jun 23 11:49:09 node5 rexray[7675]: time="2016-06-23T11:49:09-04:00" level=info msg=vdm.Mount driverName=docker moduleName
Jun 23 11:49:09 node5 rexray[7675]: time="2016-06-23T11:49:09-04:00" level=info msg=sdm.GetVolumeAttach driverName=ScaleIO
Jun 23 11:49:09 node5 rexray[7675]: time="2016-06-23T11:49:09-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO modul
Jun 23 11:49:09 node5 rexray[7675]: time="2016-06-23T11:49:09-04:00" level=info msg="getting path to volume" driverName=do
Jun 23 11:49:09 node5 rexray[7675]: time="2016-06-23T11:49:09-04:00" level=info msg=vdm.Path driverName=docker moduleName=
Jun 23 11:35:16 node5 rexray[7675]: time="2016-06-23T11:35:16-04:00" level=info msg=odm.GetMounts deviceName= driverName=l
Jun 23 11:35:16 node5 rexray[7675]: time="2016-06-23T11:35:16-04:00" level=info msg=sdm.GetVolumeAttach driverName=ScaleIO
Jun 23 11:35:15 node5 rexray[7675]: time="2016-06-23T11:35:15-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO modul
Jun 23 11:35:15 node5 rexray[7675]: time="2016-06-23T11:35:15-04:00" level=info msg="getting path to volume" driverName=do
Jun 23 11:35:15 node5 rexray[7675]: time="2016-06-23T11:35:15-04:00" level=info msg=vdm.Path driverName=docker moduleName=
Jun 23 11:34:09 node5 rexray[7675]: time="2016-06-23T11:34:09-04:00" level=info msg=odm.GetMounts deviceName= driverName=l
Jun 23 11:34:08 node5 rexray[7675]: time="2016-06-23T11:34:08-04:00" level=info msg=sdm.GetVolumeAttach driverName=ScaleIO
Jun 23 11:34:08 node5 rexray[7675]: time="2016-06-23T11:34:08-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO modul
Jun 23 11:34:08 node5 rexray[7675]: time="2016-06-23T11:34:08-04:00" level=info msg="getting path to volume" driverName=do
Jun 23 11:34:08 node5 rexray[7675]: time="2016-06-23T11:34:08-04:00" level=info msg=vdm.Path driverName=docker moduleName=
Jun 23 11:21:37 node5 rexray[7675]: time="2016-06-23T11:21:37-04:00" level=error msg="device or resource busy"
Jun 23 11:21:37 node5 rexray[7675]: time="2016-06-23T11:21:37-04:00" level=error msg="/VolumeDriver.Unmount: error unmount
Jun 23 11:21:36 node5 rexray[7675]: time="2016-06-23T11:21:36-04:00" level=info msg=odm.Unmount driverName=linux moduleNam
Jun 23 11:21:36 node5 rexray[7675]: time="2016-06-23T11:21:36-04:00" level=info msg=odm.GetMounts deviceName= driverName=l
Jun 23 11:21:36 node5 rexray[7675]: time="2016-06-23T11:21:36-04:00" level=info msg=sdm.GetVolumeAttach driverName=ScaleIO
Jun 23 11:21:36 node5 rexray[7675]: time="2016-06-23T11:21:36-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO modul
Jun 23 11:21:36 node5 rexray[7675]: time="2016-06-23T11:21:36-04:00" level=info msg="unmounting volume" driverName=docker 
Jun 23 11:21:36 node5 rexray[7675]: time="2016-06-23T11:21:36-04:00" level=info msg="initialized count" count=0 moduleName
Jun 23 11:21:36 node5 rexray[7675]: time="2016-06-23T11:21:36-04:00" level=info msg=vdm.Unmount driverName=docker moduleNa
Jun 23 11:21:36 node5 rexray[7675]: time="2016-06-23T11:21:36-04:00" level=error msg="error waiting on volume to mount" in
Jun 23 11:21:36 node5 rexray[7675]: time="2016-06-23T11:21:36-04:00" level=error msg="/VolumeDriver.Mount: error mounting 
Jun 23 11:21:25 node5 rexray[7675]: time="2016-06-23T11:21:25-04:00" level=info msg=sdm.AttachVolume driverName=ScaleIO fo
Jun 23 11:21:25 node5 rexray[7675]: time="2016-06-23T11:21:25-04:00" level=info msg=odm.Unmount driverName=linux moduleNam
Jun 23 11:21:25 node5 rexray[7675]: time="2016-06-23T11:21:25-04:00" level=info msg=sdm.GetVolumeAttach driverName=ScaleIO
Jun 23 11:21:25 node5 rexray[7675]: time="2016-06-23T11:21:25-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO modul
Jun 23 11:21:25 node5 rexray[7675]: time="2016-06-23T11:21:25-04:00" level=info msg="mounting volume" driverName=docker mo
Jun 23 11:21:25 node5 rexray[7675]: time="2016-06-23T11:21:25-04:00" level=info msg=vdm.Mount driverName=docker moduleName
[root@node5 ~]# 

Thanks again for your help!!

clintkitson commented 8 years ago

Thanks @akamalov.

How about the output of /opt/emc/scaleio/sdc/bin/drv_cfg --query_vols and ls /dev/disk/by-id?

akamalov commented 8 years ago

Here is the output:

[root@node4 ~]# /opt/emc/scaleio/sdc/bin/drv_cfg --query_vols
Retrieved 1 volume(s)
VOL-ID c047cb2900000003 MDM-ID 076534e45e5ed7ca
[root@node4 ~]# 

Volume c047cb2900000003 is the same as the volume test I've been trying to mount using docker run:

[root@node4 ~]# rexray volume get --volumename="test"
INFO[0000] [linux]                                      
INFO[0000] [docker]                                     
INFO[0000] [scaleio]                                    
INFO[0000] docker volume driver initialized              availabilityZone= iops= moduleName= provider=docker size= volumeRootPath=/data volumeType=
INFO[0000] storage driver initialized                    apiVersion=2 endpoint=https://192.168.120.166/api insecure=true moduleName= provider=ScaleIO useCerts=false
INFO[0000] os driver initialized                         moduleName= provider=linux
INFO[0000] sdm.GetVolume                                 driverName=ScaleIO moduleName= volumeID= volumeName=test
- name: test
  volumeid: c047cb2900000003
  availabilityzone: domain1
  status: ""
  volumetype: pool1
  iops: 0
  size: "16"
  networkname: ""
  attachments:
  - volumeid: c047cb2900000003
    instanceid: d503826c0000000e
    devicename: ""
    status: ""

[root@node4 ~]# 

Output of ls /dev/disk/by-id:

[root@node4 ~]# ls /dev/disk/by-id
ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001
dm-name-vgcore-opt_bsa_bladelogic
dm-name-vgcore-users
dm-name-vgcore-usr_bltemp
dm-name-vgsys-diskdump
dm-name-vgsys-root
dm-name-vgsys-swap
dm-name-vgsys-tmp
dm-name-vgsys-var
dm-name-vgsys-var_log
dm-name-vgsys-var_tmp
dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQC36GA4IQGxH7F28dAYO0VAMst8l76nzYz
dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCFekHIQ3HXBksyHuS3BldToeDt805jtRu
dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCfUA0KLvTrf2WmwYukv5rp5Bfg0pjE5q9
dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCoQBm5xSeUwlsdT2uzsJJ70LXEowD4BHv
dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCqaQXUXaAlyhRxIZL0BOPMNyZe2dXNj5d
dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCsXItwxaeSznsYV02me9MRZe8IlWXtjus
dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCyn7IxT0ZhJBJxsHXrh11SMy00QvlexDZ
dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1ASqnpvAoIDSP96RmB1w520GIER5UWyVY
dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1kDGXUNH0rv4QdkY1K2jvyyDg7g79csOe
dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1TlGVVBP9yMpqeVBXwytdVXDDRywdPo7Z
emc-vol-076534e45e5ed7ca-c047cb2900000003
lvm-pv-uuid-cslnL6-qXaR-xAQB-rnxl-0Se6-KUCR-IocCJz
lvm-pv-uuid-farDq1-WPZp-U3h5-oSPL-cS0C-ajdD-j3odJ7
[root@node4 ~]# 
clintkitson commented 8 years ago

@akamalov My apologies for the delay. I am curious about where the link to emc-vol-076534e45e5ed7ca-c047cb2900000003 is. Can you try performing an ls -laF /dev/disk/by-id instead? This link is how we determine what the device name would be in the attachments field.
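For illustration, the lookup is roughly along these lines; this is only a sketch inferred from the directory listing above and the emc-vol-<mdm-id>-<volume-id> link naming, not code taken from REX-Ray itself:

# Sketch: resolve a ScaleIO volume's local device name via /dev/disk/by-id
MDM_ID=076534e45e5ed7ca
VOL_ID=c047cb2900000003
LINK="/dev/disk/by-id/emc-vol-${MDM_ID}-${VOL_ID}"
if [ -L "$LINK" ]; then
    # e.g. resolves to /dev/scinia; this is what would end up as devicename in attachments
    readlink -f "$LINK"
else
    # with the link absent, there is no device name to report
    echo "no device name returned"
fi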

akamalov commented 8 years ago

Thanks @clintonskitson. Here it is:

[root@node4 ~]# ls -laF /dev/disk/by-id
total 0
drwxr-xr-x 2 root root 500 Jun 26 05:02 ./
drwxr-xr-x 5 root root 100 Jun 20 09:57 ../
lrwxrwxrwx 1 root root   9 Jun 20 09:57 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -> ../../sr0
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgcore-opt_bsa_bladelogic -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgcore-users -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgcore-usr_bltemp -> ../../dm-8
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-diskdump -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-root -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-swap -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-tmp -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-var -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-var_log -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-var_tmp -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQC36GA4IQGxH7F28dAYO0VAMst8l76nzYz -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCFekHIQ3HXBksyHuS3BldToeDt805jtRu -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCfUA0KLvTrf2WmwYukv5rp5Bfg0pjE5q9 -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCoQBm5xSeUwlsdT2uzsJJ70LXEowD4BHv -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCqaQXUXaAlyhRxIZL0BOPMNyZe2dXNj5d -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCsXItwxaeSznsYV02me9MRZe8IlWXtjus -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCyn7IxT0ZhJBJxsHXrh11SMy00QvlexDZ -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1ASqnpvAoIDSP96RmB1w520GIER5UWyVY -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1kDGXUNH0rv4QdkY1K2jvyyDg7g79csOe -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1TlGVVBP9yMpqeVBXwytdVXDDRywdPo7Z -> ../../dm-8
lrwxrwxrwx 1 root root  10 Jun 20 09:57 lvm-pv-uuid-cslnL6-qXaR-xAQB-rnxl-0Se6-KUCR-IocCJz -> ../../sda2
lrwxrwxrwx 1 root root  10 Jun 20 09:57 lvm-pv-uuid-farDq1-WPZp-U3h5-oSPL-cS0C-ajdD-j3odJ7 -> ../../sdb1
[root@nodee4 ~]# 
clintkitson commented 8 years ago

Hmm, I'm not seeing the previously listed EMC device that was there. What I am getting at here is that we use this link to determine the valid device path. If the device is attached and it has a path here, then RR shouldn't have a problem mounting it at that point.

If the volume is currently attached yet the device is not listed here, that would represent the problem and point to SIO rather than RR.

akamalov commented 8 years ago

@clintonskitson, I apologize. It looks like the device comes up on node4 only if I issue the scli command from ScaleIO to map the volume. Here is the sequence of commands and output:

Map volume to node4:

ScaleIO-192-168-120-150:~ # scli --mdm_ip 192.168.120.151 --map_volume_to_sdc --volume_name test --sdc_ip  192.168.120.164
Successfully mapped volume test to SDC 192.168.120.164
ScaleIO-192-168-120-150:~ # 

From node4, list disks:

[root@node4 ~]# ls -laF /dev/disk/by-id
total 0
drwxr-xr-x 2 root root 520 Jun 29 06:51 ./
drwxr-xr-x 5 root root 100 Jun 20 09:57 ../
lrwxrwxrwx 1 root root   9 Jun 20 09:57 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -> ../../sr0
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgcore-opt_bsa_bladelogic -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgcore-users -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgcore-usr_bltemp -> ../../dm-8
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-diskdump -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-root -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-swap -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-tmp -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-var -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-var_log -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-var_tmp -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQC36GA4IQGxH7F28dAYO0VAMst8l76nzYz -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCFekHIQ3HXBksyHuS3BldToeDt805jtRu -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCfUA0KLvTrf2WmwYukv5rp5Bfg0pjE5q9 -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCoQBm5xSeUwlsdT2uzsJJ70LXEowD4BHv -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCqaQXUXaAlyhRxIZL0BOPMNyZe2dXNj5d -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCsXItwxaeSznsYV02me9MRZe8IlWXtjus -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCyn7IxT0ZhJBJxsHXrh11SMy00QvlexDZ -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1ASqnpvAoIDSP96RmB1w520GIER5UWyVY -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1kDGXUNH0rv4QdkY1K2jvyyDg7g79csOe -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1TlGVVBP9yMpqeVBXwytdVXDDRywdPo7Z -> ../../dm-8
lrwxrwxrwx 1 root root  12 Jun 29 06:51 emc-vol-076534e45e5ed7ca-c047cb2900000003 -> ../../scinia
lrwxrwxrwx 1 root root  10 Jun 20 09:57 lvm-pv-uuid-cslnL6-qXaR-xAQB-rnxl-0Se6-KUCR-IocCJz -> ../../sda2
lrwxrwxrwx 1 root root  10 Jun 20 09:57 lvm-pv-uuid-farDq1-WPZp-U3h5-oSPL-cS0C-ajdD-j3odJ7 -> ../../sdb1
[root@node4 ~]# 

Peculiar thing: while watching node4's /var/log/messages, I happened to notice a lot of messages such as:

Jun 29 06:48:36 node4 env: unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 06:48:36 node4 env: 06:48:36.006 [warning] unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 06:48:36 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 06:48:38 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 06:48:40 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 06:48:41 node4 env: unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 06:48:41 node4 env: 06:48:41.007 [warning] unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 06:48:42 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 06:48:44 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 06:48:46 node4 env: unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 06:48:46 node4 env: 06:48:46.009 [warning] unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 06:48:46 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 06:48:47 node4 kernel: blk_update_request: I/O error, dev scinia, sector 0
Jun 29 06:48:47 node4 kernel: Buffer I/O error on device scinia, logical block 0
Jun 29 06:48:48 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 06:48:50 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 06:48:51 node4 env: unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 06:48:51 node4 env: 06:48:51.011 [warning] unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 06:48:52 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 06:48:54 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 06:48:56 node4 env: unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 06:48:56 node4 env: 06:48:56.013 [warning] unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 06:48:56 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68

Don't know if it is of any value, but I thought it worth mentioning...

Thanks again for helping!!!

Alex

akamalov commented 8 years ago

Here is another example; I am trying to mount the volume via REX-Ray:

[root@node4 ~]# docker run -ti --volume-driver=rexray -v test:/test busybox
docker: Error response from daemon: VolumeDriver.Mount: {"Error":"error waiting on volume to mount"}.
[root@node4 ~]# 

Monitoring from the other console:

[root@node4 ~]# tail -f /var/log/messages
Jun 29 07:01:47 node4 rexray: time="2016-06-29T07:01:47-04:00" level=info msg=sdm.GetVolume driverName=ScaleIO moduleName=default-docker volumeID= volumeName=test
Jun 29 07:01:47 node4 rexray: time="2016-06-29T07:01:47-04:00" level=info msg=sdm.GetVolumeAttach driverName=ScaleIO instanceID=d50382690000000b moduleName=default-docker volumeID=c047cb2900000003
Jun 29 07:01:47 node4 rexray: time="2016-06-29T07:01:47-04:00" level=info msg=odm.GetMounts deviceName= driverName=linux moduleName=default-docker mountPoint=
Jun 29 07:01:47 node4 rexray: time="2016-06-29T07:01:47-04:00" level=info msg=odm.Unmount driverName=linux moduleName=default-docker mountPoint="/sys"
Jun 29 07:01:48 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:01:48 node4 rexray: time="2016-06-29T07:01:48-04:00" level=error msg="/VolumeDriver.Unmount: error unmounting volume" error="device or resource busy"
Jun 29 07:01:48 node4 rexray: time="2016-06-29T07:01:48-04:00" level=error msg="device or resource busy"
Jun 29 07:01:48 node4 docker: time="2016-06-29T07:01:48.174925571-04:00" level=warning msg="5e0d19eed017716e03f205fc056e17050e4a9500b2c06465e9f69939e9ec3c55 cleanup: Failed to umount volumes: VolumeDriver.Unmount: {\"Error\":\"device or resource busy\"}\n"
Jun 29 07:01:48 node4 docker: time="2016-06-29T07:01:48.175126615-04:00" level=error msg="Handler for POST /v1.23/containers/5e0d19eed017716e03f205fc056e17050e4a9500b2c06465e9f69939e9ec3c55/start returned error: VolumeDriver.Mount: {\"Error\":\"error waiting on volume to mount\"}\n"
Jun 29 07:01:50 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:01:51 node4 env: unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 07:01:51 node4 env: 07:01:51.346 [warning] unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 07:01:52 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:01:54 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:01:56 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:01:56 node4 env: unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 07:01:56 node4 env: 07:01:56.348 [warning] unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 07:01:58 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:02:00 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:02:01 node4 kernel: blk_update_request: I/O error, dev scinia, sector 0
Jun 29 07:02:01 node4 kernel: Buffer I/O error on device scinia, logical block 0
Jun 29 07:02:01 node4 env: unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 07:02:01 node4 env: 07:02:01.350 [warning] unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 07:02:02 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:02:04 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:02:06 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:02:06 node4 systemd: Starting Generate resolv.conf: Update systemd-resolved for MesosDNS...
Jun 29 07:02:06 node4 systemd: Starting Logrotate: Rotate various logs on the system...
Jun 29 07:02:06 node4 systemd: Started Logrotate: Rotate various logs on the system.
Jun 29 07:02:06 node4 gen_resolvconf.py: Updating /etc/resolv.conf
Jun 29 07:02:06 node4 gen_resolvconf.py: # Generated by gen_resolvconf.py. Do not edit.
Jun 29 07:02:06 node4 gen_resolvconf.py: # Change configuration options by changing DC/OS cluster configuration.
Jun 29 07:02:06 node4 gen_resolvconf.py: # This file must be overwritten regularly for proper cluster operation around
Jun 29 07:02:06 node4 gen_resolvconf.py: # master failure.
Jun 29 07:02:06 node4 gen_resolvconf.py: options timeout:1
Jun 29 07:02:06 node4 gen_resolvconf.py: options attempts:3
Jun 29 07:02:06 node4 env: unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 07:02:06 node4 env: 07:02:06.353 [warning] unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 07:02:06 node4 systemd: Started Generate resolv.conf: Update systemd-resolved for MesosDNS.
Jun 29 07:02:08 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:02:10 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:02:11 node4 env: unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 07:02:11 node4 env: 07:02:11.355 [warning] unknown SubSys/MsgType: rtnetlink/newqdisc
Jun 29 07:02:12 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
Jun 29 07:02:14 node4 kernel: ScaleIO R2_0 netChan_SendReq_CK:3118 :Chan ffff8807f7730088 establish rc=68
clintkitson commented 8 years ago

Can you perform a rexray volume attach -l debug on that volume ID? Also what is the underlying version of SIO?

clintkitson commented 8 years ago

@akamalov By the way, in the last log the problem is likely that there is a process currently running that is using resources from the mounted volume. If you perform an lsof | grep <path>, you should be able to determine what that process is. Typically RR is used in a workflow, so the container exiting would signify that all handles, etc., on the volume are closed, making an unmount possible.
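As a concrete (hypothetical) example, assuming the volume is still mounted somewhere on the host, something like the following would show which process is holding it; discovering the mount point via grep scini is just one way to find the path:

# Find the mount point of the ScaleIO-backed device, then list open handles on it
MOUNTPOINT=$(mount | grep scini | awk '{print $3}')
lsof +D "$MOUNTPOINT"        # processes with files open under the mount point
# or, as suggested above:
lsof | grep "$MOUNTPOINT"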

akamalov commented 8 years ago

Can you perform a rexray volume attach -l debug

A: Here is the result:

[root@node4 ~]# rexray volume attach -l debug --volumeid=c047cb2900000003
DEBU[0000] updated log level                             logLevel=debug
INFO[0000] [linux]                                      
INFO[0000] [docker]                                     
INFO[0000] [scaleio]                                    
DEBU[0000] core get drivers                              osDrivers=[linux] storageDrivers=[scaleio] volumeDrivers=[docker]
INFO[0000] docker volume driver initialized              availabilityZone= iops= moduleName= provider=docker size= volumeRootPath=/data volumeType=
INFO[0000] os driver initialized                         moduleName= provider=linux
INFO[0000] storage driver initialized                    apiVersion=2 endpoint=https://192.168.120.166/api insecure=true moduleName= provider=ScaleIO useCerts=false
DEBU[0000] checking volume path cache setting            pathCache=false
INFO[0000] sdm.AttachVolume                              driverName=ScaleIO force=false instanceID= moduleName= runAsync=false volumeID=c047cb2900000003
DEBU[0000] waiting for volume mount                      provider=ScaleIO
FATA[0010] error waiting on volume to mount              inner.provider=ScaleIO instanceId= moduleName= provider=ScaleIO runAsync=false volumeId=c047cb2900000003
[root@node4 ~]# 

what is the underlying version of SIO?

A: EMC-ScaleIO-2.0-6035

clintkitson commented 8 years ago

Can you also review the list of devices in /dev/disk..

akamalov commented 8 years ago

[root@node4 ~]# ls -al /dev/disk
total 0
drwxr-xr-x  5 root root  100 Jun 20 09:57 .
drwxr-xr-x 20 root root 3380 Jun 29 12:32 ..
drwxr-xr-x  2 root root  520 Jun 29 12:36 by-id
drwxr-xr-x  2 root root  160 Jun 20 09:57 by-path
drwxr-xr-x  2 root root  260 Jun 20 09:57 by-uuid
[root@node4 ~]# 
[root@node4 ~]# ls -laF /dev/disk/by-id
total 0
drwxr-xr-x 2 root root 520 Jun 29 12:36 ./
drwxr-xr-x 5 root root 100 Jun 20 09:57 ../
lrwxrwxrwx 1 root root   9 Jun 20 09:57 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -> ../../sr0
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgcore-opt_bsa_bladelogic -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgcore-users -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgcore-usr_bltemp -> ../../dm-8
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-diskdump -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-root -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-swap -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-tmp -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-var -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-var_log -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-name-vgsys-var_tmp -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQC36GA4IQGxH7F28dAYO0VAMst8l76nzYz -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCFekHIQ3HXBksyHuS3BldToeDt805jtRu -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCfUA0KLvTrf2WmwYukv5rp5Bfg0pjE5q9 -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCoQBm5xSeUwlsdT2uzsJJ70LXEowD4BHv -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCqaQXUXaAlyhRxIZL0BOPMNyZe2dXNj5d -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCsXItwxaeSznsYV02me9MRZe8IlWXtjus -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCyn7IxT0ZhJBJxsHXrh11SMy00QvlexDZ -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1ASqnpvAoIDSP96RmB1w520GIER5UWyVY -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1kDGXUNH0rv4QdkY1K2jvyyDg7g79csOe -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jun 20 09:57 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1TlGVVBP9yMpqeVBXwytdVXDDRywdPo7Z -> ../../dm-8
lrwxrwxrwx 1 root root  12 Jun 29 12:36 emc-vol-076534e45e5ed7ca-c047cb2900000003 -> ../../scinia
lrwxrwxrwx 1 root root  10 Jun 20 09:57 lvm-pv-uuid-cslnL6-qXaR-xAQB-rnxl-0Se6-KUCR-IocCJz -> ../../sda2
lrwxrwxrwx 1 root root  10 Jun 20 09:57 lvm-pv-uuid-farDq1-WPZp-U3h5-oSPL-cS0C-ajdD-j3odJ7 -> ../../sdb1
[root@node4 ~]# 
[root@node4 ~]# ls -laF /dev/disk/by-uuid
total 0
drwxr-xr-x 2 root root 260 Jun 20 09:57 ./
drwxr-xr-x 5 root root 100 Jun 20 09:57 ../
lrwxrwxrwx 1 root root  10 Jun 20 09:57 02e06f8f-1815-4e8f-989e-3379fcc1888c -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jun 20 09:57 62bc4c55-ed6b-4537-bf1c-462349b5a521 -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jun 20 09:57 68acd516-3a51-4ac7-bbea-35a76e8b5605 -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jun 20 09:57 6a6648f5-81d3-43b0-af8d-ce08785c4599 -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jun 20 09:57 8d3e27b2-ba8c-4ae1-b21e-0a685a33e4d9 -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jun 20 09:57 9a3c6ec1-a022-4cd4-b8f6-23382dede5d1 -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jun 20 09:57 ca6de9ff-b82c-4bc8-a73c-2ff96bf505ac -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jun 20 09:57 cf3633b3-7e80-428c-ac80-4a67ed107a60 -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jun 20 09:57 d4bfdd90-443e-4212-88f1-dd865c093ff7 -> ../../sda1
lrwxrwxrwx 1 root root  10 Jun 20 09:57 ea8e6bd0-676b-43a0-a1dd-56115f052c32 -> ../../dm-8
lrwxrwxrwx 1 root root  10 Jun 20 09:57 f371fe12-6eea-4613-b983-546897861c43 -> ../../dm-9
[root@node4 ~]# ls -laF /dev/disk/by-path
total 0
drwxr-xr-x 2 root root 160 Jun 20 09:57 ./
drwxr-xr-x 5 root root 100 Jun 20 09:57 ../
lrwxrwxrwx 1 root root   9 Jun 20 09:57 pci-0000:00:07.1-ata-2.0 -> ../../sr0
lrwxrwxrwx 1 root root   9 Jun 20 09:57 pci-0000:03:00.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root  10 Jun 20 09:57 pci-0000:03:00.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 Jun 20 09:57 pci-0000:03:00.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root   9 Jun 20 09:57 pci-0000:03:00.0-scsi-0:0:1:0 -> ../../sdb
lrwxrwxrwx 1 root root  10 Jun 20 09:57 pci-0000:03:00.0-scsi-0:0:1:0-part1 -> ../../sdb1
[root@node4 ~]# 
clintkitson commented 8 years ago

Sorry one last question. What is the result of /opt/emc/scaleio/sdc/bin/drv_cfg --query_vols?

akamalov commented 8 years ago

Here you go...

[root@node4 ~]# /opt/emc/scaleio/sdc/bin/drv_cfg --query_vols
Retrieved 1 volume(s)
VOL-ID c047cb2900000003 MDM-ID 076534e45e5ed7ca
[root@node4 ~]# 
akamalov commented 8 years ago

@clintonskitson any updates by chance?

Thanks!!

akamalov commented 8 years ago

I also wanted to provide an update: authentication and interrogation against ScaleIO are working normally:

First, obtain a token:

[root@node4 ~]#  curl -k --basic --user admin:XXXXXX https://192.168.120.166/api/login
"YWRtaW46MTQ2NzM0NDA4OTc0NTo0MjZhYzBkYTdjNmMyNzA1Y2RiNDZkYjZjZjUzNzA2Mg"
 [root@node4 ~]# 

Second, interrogate the ScaleIO environment:

[root@node4 ~]# curl -k --basic --user admin:YWRtaW46MTQ2NzM0NDA4OTc0NTo0MjZhYzBkYTdjNmMyNzA1Y2RiNDZkYjZjZjUzNzA2Mg https://192.168.120.166/api/types/System/instances | jq .

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4032    0  4032    0     0  51570      0 --:--:-- --:--:-- --:--:-- 52363
[
  {
    "defaultIsVolumeObfuscated": false,
    "restrictedSdcModeEnabled": false,
    "swid": "",
    "daysInstalled": 187,
    "maxCapacityInGb": "Unlimited",
    "capacityTimeLeftInDays": "Unlimited",
    "enterpriseFeaturesEnabled": true,
    "isInitialLicense": true,
    "installId": "52b107a55ddd3582",
    "capacityAlertHighThresholdPercent": 80,
    "capacityAlertCriticalThresholdPercent": 90,
    "remoteReadOnlyLimitState": false,
    "upgradeState": "NoUpgrade",
    "performanceParameters": {
      "mdmNumberSdcReceiveUmt": 5,
      "mdmNumberSdsReceiveUmt": 10,
      "mdmNumberSdsSendUmt": 10,
      "mdmNumberSdsKeepaliveReceiveUmt": 10,
      "mdmSdsCapacityCountersUpdateInterval": 1,
      "mdmNumberSdsTasksUmt": 1024,
      "mdmSdsCapacityCountersPollingInterval": 5,
      "mdmSdsVolumeSizePollingInterval": 15,
      "mdmSdsVolumeSizePollingRetryInterval": 5,
      "mdmInitialSdsSnapshotCapacity": 1024,
      "mdmSdsSnapshotCapacityChunkSize": 5120,
      "mdmSdsSnapshotUsedCapacityThreshold": 50,
      "mdmSdsSnapshotFreeCapacityThreshold": 200,
      "perfProfile": "HighPerformance"
    },
    "currentProfilePerformanceParameters": {
      "mdmNumberSdcReceiveUmt": 5,
      "mdmNumberSdsReceiveUmt": 10,
      "mdmNumberSdsSendUmt": 10,
      "mdmNumberSdsKeepaliveReceiveUmt": 10,
      "mdmSdsCapacityCountersUpdateInterval": 1,
      "mdmNumberSdsTasksUmt": 1024,
      "mdmSdsCapacityCountersPollingInterval": 5,
      "mdmSdsVolumeSizePollingInterval": 15,
      "mdmSdsVolumeSizePollingRetryInterval": 5,
      "mdmInitialSdsSnapshotCapacity": 1024,
      "mdmSdsSnapshotCapacityChunkSize": 5120,
      "mdmSdsSnapshotUsedCapacityThreshold": 50,
      "mdmSdsSnapshotFreeCapacityThreshold": 200,
      "perfProfile": null
    },
    "sdcMdmNetworkDisconnectionsCounterParameters": {
      "shortWindow": {
        "threshold": 300,
        "windowSizeInSec": 60
      },
      "mediumWindow": {
        "threshold": 500,
        "windowSizeInSec": 3600
      },
      "longWindow": {
        "threshold": 700,
        "windowSizeInSec": 86400
      }
    },
    "sdcSdsNetworkDisconnectionsCounterParameters": {
      "shortWindow": {
        "threshold": 800,
        "windowSizeInSec": 60
      },
      "mediumWindow": {
        "threshold": 4000,
        "windowSizeInSec": 3600
      },
      "longWindow": {
        "threshold": 20000,
        "windowSizeInSec": 86400
      }
    },
    "sdcMemoryAllocationFailuresCounterParameters": {
      "shortWindow": {
        "threshold": 300,
        "windowSizeInSec": 60
      },
      "mediumWindow": {
        "threshold": 500,
        "windowSizeInSec": 3600
      },
      "longWindow": {
        "threshold": 700,
        "windowSizeInSec": 86400
      }
    },
    "sdcSocketAllocationFailuresCounterParameters": {
      "shortWindow": {
        "threshold": 300,
        "windowSizeInSec": 60
      },
      "mediumWindow": {
        "threshold": 500,
        "windowSizeInSec": 3600
      },
      "longWindow": {
        "threshold": 700,
        "windowSizeInSec": 86400
      }
    },
    "sdcLongOperationsCounterParameters": {
      "shortWindow": {
        "threshold": 10000,
        "windowSizeInSec": 60
      },
      "mediumWindow": {
        "threshold": 100000,
        "windowSizeInSec": 3600
      },
      "longWindow": {
        "threshold": 1000000,
        "windowSizeInSec": 86400
      }
    },
    "cliPasswordAllowed": true,
    "managementClientSecureCommunicationEnabled": true,
    "tlsVersion": "TLSv1.2",
    "showGuid": true,
    "authenticationMethod": "Native",
    "mdmToSdsPolicy": "Authentication",
    "mdmCluster": {
      "goodNodesNum": 3,
      "goodReplicasNum": 2,
      "tieBreakers": [
        {
          "role": "TieBreaker",
          "status": "Normal",
          "ips": [
            "192.168.1.152"
          ],
          "versionInfo": "R2_0.6035.0",
          "managementIPs": [
            "192.168.1.152"
          ],
          "name": "TB1",
          "id": "4fc0530874c9b832",
          "port": 9011
        }
      ],
      "clusterState": "ClusteredNormal",
      "master": {
        "role": "Manager",
        "ips": [
          "192.168.1.151"
        ],
        "versionInfo": "R2_0.6035.0",
        "managementIPs": [
          "192.168.120.151"
        ],
        "name": "Manager2",
        "id": "4cbb452b1d5f7f81",
        "port": 9011
      },
      "clusterMode": "ThreeNodes",
      "slaves": [
        {
          "role": "Manager",
          "status": "Normal",
          "ips": [
            "192.168.1.150"
          ],
          "versionInfo": "R2_0.6035.0",
          "managementIPs": [
            "192.168.120.150"
          ],
          "name": "Manager1",
          "id": "644d6bff358563b0",
          "port": 9011
        }
      ],
      "name": "cluster1",
      "id": "532890286353733578"
    },
    "systemVersionName": "EMC ScaleIO Version: R2_0.6035.0",
    "name": "cluster1",
    "id": "076534e45e5ed7ca",
    "links": [
      {
        "rel": "self",
        "href": "/api/instances/System::076534e45e5ed7ca"
      },
      {
        "rel": "/api/System/relationship/Statistics",
        "href": "/api/instances/System::076534e45e5ed7ca/relationships/Statistics"
      },
      {
        "rel": "/api/System/relationship/ProtectionDomain",
        "href": "/api/instances/System::076534e45e5ed7ca/relationships/ProtectionDomain"
      },
      {
        "rel": "/api/System/relationship/Sdc",
        "href": "/api/instances/System::076534e45e5ed7ca/relationships/Sdc"
      },
      {
        "rel": "/api/System/relationship/User",
        "href": "/api/instances/System::076534e45e5ed7ca/relationships/User"
      }
    ]
  }
]
[root@node4 ~]#

As you can see, we’re correctly authenticated, and we can interrogate our ScaleIO environment 
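For reference, the two calls above can be combined so that the session token returned by /api/login is reused as the basic-auth password for the follow-up request (same endpoint and credentials as in this thread):

TOKEN=$(curl -sk --basic --user admin:XXXXXX https://192.168.120.166/api/login | tr -d '"')
curl -sk --basic --user admin:${TOKEN} https://192.168.120.166/api/types/System/instances | jq .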

clintkitson commented 8 years ago

@akamalov Sorry for the break in response; we are very much looking into this and should have a response on it shortly.

akamalov commented 8 years ago

@clintonskitson Thanks so much!!

cduchesne commented 8 years ago

@akamalov - is the problem still completely repeatable? If you have time, I'd like to sync up via a desktop share to get some more details about what you're seeing in real time. I am in the codecommunity slack if you want to chat live as well.

akamalov commented 8 years ago

@cduchesne, yes, it is still happening. Would it be possible to have a desktop share today? Please let me know when you can and I'll provide a WebEx link.

Thanks yet again!

cduchesne commented 8 years ago

@akamalov - I am open from 9am to 12pm PST if there is a time in there that works best for you

akamalov commented 8 years ago

@cduchesne WebEx is up. I sent an email to your private email account.

akamalov commented 8 years ago

Thanks so much to @cduchesne, the problem was solved!!! It had to do with how ScaleIO communicates and provisions volumes. In essence, communication with the ScaleIO gateway happens over the management network, while data provisioning happens over the data plane. My mistake (very dumb, and embarrassing) was that I thought both of these operations happen over the management network. Once an interface was created on the data network and an IP assigned, the problems went away.
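For anyone hitting the same symptom, a quick sanity check along these lines (addresses and interface layout are the ones from this thread, so adjust to your environment) confirms the node actually has a leg on the ScaleIO data network before suspecting REX-Ray:

# Management plane: ScaleIO gateway used by REX-Ray
ping -c 3 192.168.120.166
# Data plane: MDM addresses used by the SDC kernel module
ping -c 3 192.168.1.150
ping -c 3 192.168.1.151
# Confirm this node has an address on the data network
ip -4 addr show | grep '192\.168\.1\.'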

Kudos to @cduchesne!!! I bow to my new overlord :)

Thanks Chris!!

cduchesne commented 8 years ago

@akamalov - any time... :)

akamalov commented 8 years ago

Hey @cduchesne,

It looks like the problem has come back, even though the networking portion has been corrected (as per our last session). Here is the rundown:

  1. Create 1GB volume:
[root@mnode4 network-scripts]# docker volume create --driver=rexray --name=test005 --opt=size=1
  2. Mount volume in a container on node4:
[root@mnode4 network-scripts]# docker run -ti --volume-driver=rexray -v test005:/test busybox
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  62.0G      8.0G     54.0G  13% /
tmpfs                    15.6G         0     15.6G   0% /dev
tmpfs                    15.6G         0     15.6G   0% /sys/fs/cgroup
/dev/scinia               7.7G     36.0M      7.3G   0% /test
/dev/mapper/vgsys-var
                         62.0G      8.0G     54.0G  13% /etc/resolv.conf
/dev/mapper/vgsys-var
                         62.0G      8.0G     54.0G  13% /etc/hostname
/dev/mapper/vgsys-var
                         62.0G      8.0G     54.0G  13% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                    15.6G         0     15.6G   0% /proc/kcore
tmpfs                    15.6G         0     15.6G   0% /proc/timer_stats
tmpfs                    15.6G         0     15.6G   0% /proc/sched_debug
/ # ls -al /test
total 4
drwx------    2 root     root          4096 Jul 11 18:01 .
drwxr-xr-x    1 root     root            86 Jul 11 18:05 ..
/ # exit

All is well: the volume mounted, and we exited the container.

[root@mnode4 network-scripts]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS                     PORTS               NAMES
0722b8583fcc        busybox             "sh"                About a minute ago   Exited (0) 2 seconds ago                       clever_fermi

Remove container:

[root@mnode4 network-scripts]# docker rm 072
072
[root@mnode4 network-scripts]# 

Let's take a look and see whether the volume is still attached:

[root@mnode4 network-scripts]# ls -laF /dev/disk/by-id
total 0
drwxr-xr-x 2 root root 540 Jul 11 14:06 ./
drwxr-xr-x 5 root root 100 Jul  8 14:41 ../
lrwxrwxrwx 1 root root   9 Jul  8 14:41 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -> ../../sr0
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-name-vgcore-opt_bsa_bladelogic -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-name-vgcore-users -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-name-vgcore-usr_bltemp -> ../../dm-8
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-name-vgsys-diskdump -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-name-vgsys-root -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-name-vgsys-swap -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-name-vgsys-tmp -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-name-vgsys-var -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-name-vgsys-var_log -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-name-vgsys-var_tmp -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQC36GA4IQGxH7F28dAYO0VAMst8l76nzYz -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCFekHIQ3HXBksyHuS3BldToeDt805jtRu -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCfUA0KLvTrf2WmwYukv5rp5Bfg0pjE5q9 -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCoQBm5xSeUwlsdT2uzsJJ70LXEowD4BHv -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCqaQXUXaAlyhRxIZL0BOPMNyZe2dXNj5d -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCsXItwxaeSznsYV02me9MRZe8IlWXtjus -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCyn7IxT0ZhJBJxsHXrh11SMy00QvlexDZ -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1ASqnpvAoIDSP96RmB1w520GIER5UWyVY -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1kDGXUNH0rv4QdkY1K2jvyyDg7g79csOe -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jul  8 14:41 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1TlGVVBP9yMpqeVBXwytdVXDDRywdPo7Z -> ../../dm-8
lrwxrwxrwx 1 root root  10 Jul  8 14:41 lvm-pv-uuid-cslnL6-qXaR-xAQB-rnxl-0Se6-KUCR-IocCJz -> ../../sda2
lrwxrwxrwx 1 root root  10 Jul  8 14:41 lvm-pv-uuid-farDq1-WPZp-U3h5-oSPL-cS0C-ajdD-j3odJ7 -> ../../sdb1
lrwxrwxrwx 1 root root   9 Jul  8 14:41 lvm-pv-uuid-rgnlzl-K3Da-U4JE-p97X-2DnY-eiQQ-GbDxE3 -> ../../sdd
lrwxrwxrwx 1 root root   9 Jul  8 14:41 lvm-pv-uuid-ZTA7fL-ZNq6-mc67-1SeG-SeD9-To4F-PiHAnt -> ../../sdc
[root@mnode4 network-scripts]# 

The EMC ScaleIO device ('scinia') is not listed here. Now, let's head over to node5 and try to mount the volume test005 there:

[root@mnode5 network-scripts]# docker run -ti --volume-driver=rexray -v test005:/test busybox
docker: Error response from daemon: VolumeDriver.Mount: {"Error":"error waiting on volume to mount"}.
[root@mnode5 network-scripts]# 

Hmm... let's take a look at whether we have the proper IP addresses:

[root@mslave5 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: 192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:9a:0d:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.120.165/26 brd 192.168.120.191 scope global 192
       valid_lft forever preferred_lft forever
3: 224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:9a:49:78 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.165/24 brd 192.168.1.255 scope global 224
       valid_lft forever preferred_lft forever

Let's ping our MDMs (192.168.1.150, 192.168.1.151):

[root@mnodee5 ~]# ping -c 3 192.168.1.150
PING 192.168.1.150 (192.168.1.150) 56(84) bytes of data.
64 bytes from 192.168.1.150: icmp_seq=1 ttl=64 time=1.05 ms
64 bytes from 192.168.1.150: icmp_seq=2 ttl=64 time=2.22 ms
64 bytes from 192.168.1.150: icmp_seq=3 ttl=64 time=0.343 ms

--- 192.168.1.150 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.343/1.209/2.228/0.777 ms
[root@mslave5 ~]# ping -c 3 192.168.1.151
PING 192.168.1.151 (192.168.1.151) 56(84) bytes of data.
64 bytes from 192.168.1.151: icmp_seq=1 ttl=64 time=0.950 ms
64 bytes from 192.168.1.151: icmp_seq=2 ttl=64 time=0.395 ms
64 bytes from 192.168.1.151: icmp_seq=3 ttl=64 time=0.382 ms

--- 192.168.1.151 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.382/0.575/0.950/0.266 ms
[root@mnode5 ~]# 

Pings to both MDMs are working fine. Let's display /dev/disk/by-id:

[root@mnode5 network-scripts]# ls -laF /dev/disk/by-id
total 0
drwxr-xr-x 2 root root 540 Jul 11 08:58 ./
drwxr-xr-x 5 root root 100 Jul 11 08:58 ../
lrwxrwxrwx 1 root root   9 Jul 11 08:58 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -> ../../sr0
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgcore-opt_bsa_bladelogic -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgcore-users -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgcore-usr_bltemp -> ../../dm-8
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-diskdump -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-root -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-swap -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-tmp -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-var -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-var_log -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-var_tmp -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQC36GA4IQGxH7F28dAYO0VAMst8l76nzYz -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCFekHIQ3HXBksyHuS3BldToeDt805jtRu -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCfUA0KLvTrf2WmwYukv5rp5Bfg0pjE5q9 -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCoQBm5xSeUwlsdT2uzsJJ70LXEowD4BHv -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCqaQXUXaAlyhRxIZL0BOPMNyZe2dXNj5d -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCsXItwxaeSznsYV02me9MRZe8IlWXtjus -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCyn7IxT0ZhJBJxsHXrh11SMy00QvlexDZ -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1ASqnpvAoIDSP96RmB1w520GIER5UWyVY -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1kDGXUNH0rv4QdkY1K2jvyyDg7g79csOe -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1TlGVVBP9yMpqeVBXwytdVXDDRywdPo7Z -> ../../dm-8
lrwxrwxrwx 1 root root   9 Jul 11 08:58 lvm-pv-uuid-40qKHi-PeId-Go8A-Eei1-0AZO-63zH-K51JXh -> ../../sdd
lrwxrwxrwx 1 root root  10 Jul 11 08:58 lvm-pv-uuid-cslnL6-qXaR-xAQB-rnxl-0Se6-KUCR-IocCJz -> ../../sda2
lrwxrwxrwx 1 root root  10 Jul 11 08:58 lvm-pv-uuid-farDq1-WPZp-U3h5-oSPL-cS0C-ajdD-j3odJ7 -> ../../sdb1
lrwxrwxrwx 1 root root   9 Jul 11 08:58 lvm-pv-uuid-GqsrcW-Bo4p-R3k5-pZ4l-piL1-6f6H-6Lx21a -> ../../sdc
[root@mnode5 network-scripts]# 

On node5, the disk is not listed. Let's issue lsblk:

[root@mslave5 ~]# lsblk
NAME                          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0                             2:0    1    4K  0 disk 
sda                             8:0    0   64G  0 disk 
├─sda1                          8:1    0  512M  0 part /boot
└─sda2                          8:2    0 63.5G  0 part 
  ├─vgsys-swap                253:0    0   12G  0 lvm  [SWAP]
  ├─vgsys-root                253:1    0  9.8G  0 lvm  /
  ├─vgsys-var_log             253:2    0   31G  0 lvm  /var/log
  ├─vgsys-var_tmp             253:3    0    2G  0 lvm  /var/tmp
  ├─vgsys-var                 253:4    0   62G  0 lvm  /var
  ├─vgsys-tmp                 253:5    0  4.9G  0 lvm  /tmp
  └─vgsys-diskdump            253:6    0   10G  0 lvm  /diskdump
sdb                             8:16   0   32G  0 disk 
└─sdb1                          8:17   0   32G  0 part 
  ├─vgcore-users              253:7    0    3G  0 lvm  /users
  ├─vgcore-usr_bltemp         253:8    0    2G  0 lvm  /usr/bltemp
  └─vgcore-opt_bsa_bladelogic 253:9    0    2G  0 lvm  /opt/bsa/bladelogic
sdc                             8:32   0   30G  0 disk 
├─vgsys-var_log               253:2    0   31G  0 lvm  /var/log
└─vgsys-var                   253:4    0   62G  0 lvm  /var
sdd                             8:48   0   40G  0 disk 
└─vgsys-var                   253:4    0   62G  0 lvm  /var
sr0                            11:0    1 1024M  0 rom  
[root@mslave5 ~]# 

The device is still not displayed by 'lsblk'. Let's go ahead and run sio_report:

[root@mnode5 ~]# ./sio-report https://192.168.120.166 admin XXXXXXX
{
    "systems": [
        {
            "name": "cluster1",
            "id": "076534e45e5ed7ca",
            "version": "R2_0.6035.0",
            "mdms": [
                {
                    "name": "4cbb452b1d5f7f81",
                    "id": "Manager2",
                    "role": "Manager",
                    "version": "R2_0.6035.0"
                },
                {
                    "name": "644d6bff358563b0",
                    "id": "Manager1",
                    "role": "Manager",
                    "version": "R2_0.6035.0"
                },
                {
                    "name": "4fc0530874c9b832",
                    "id": "TB1",
                    "role": "TieBreaker",
                    "version": "R2_0.6035.0"
                }
            ]
        }
    ],
    "storage_pools": [
        {
            "name": "pool1",
            "id": "907f19e300000000",
            "protection_domain_id": "33399d2500000000"
        }
    ],
    "sdcs": [
        {
            "name": "",
            "id": "d50382690000000b",
            "guid": "0E3679B1-5985-49DD-A52A-A7A33D78D1FC",
            "ip": "192.168.120.164",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826a0000000c",
            "guid": "FC3602E2-B3A2-4D5B-B6AA-721E1603C51D",
            "ip": "192.168.120.161",
            "state": "Disconnected",
            "version": ""
        },
        {
            "name": "",
            "id": "d503826700000009",
            "guid": "F378F48C-A093-4632-9C48-DA2F42A3B699",
            "ip": "192.168.120.160",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826600000008",
            "guid": "13892CAC-1964-43A6-9E80-BE584696DE06",
            "ip": "192.168.120.159",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826500000007",
            "guid": "41C4D39A-0940-4B59-B20B-1AA8974DF98E",
            "ip": "192.168.120.158",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826400000006",
            "guid": "90024D18-61C1-436E-A39A-98D4940F4EF9",
            "ip": "192.168.120.157",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826300000005",
            "guid": "B1B5BB10-F967-4901-91B0-65A64D3F1E44",
            "ip": "192.168.120.156",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826b0000000d",
            "guid": "28A19C7E-636A-4020-A674-5796BC341D8B",
            "ip": "192.168.120.163",
            "state": "Disconnected",
            "version": ""
        },
        {
            "name": "",
            "id": "d50382680000000a",
            "guid": "43B2EC5F-C45F-4049-8020-49A175881EC0",
            "ip": "192.168.120.162",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826c0000000e",
            "guid": "99222D6D-0BED-4DAA-994E-D76D7CD25F08",
            "ip": "192.168.120.165",
            "state": "Disconnected",
            "version": ""
        },
        {
            "name": "ESX-192.168.120.139",
            "id": "d5035b5600000003",
            "guid": "E49006E9-9782-4627-8236-17D4DA013C11",
            "ip": "192.168.120.139",
            "state": "Connected",
            "version": "R2_0.6008.0"
        },
        {
            "name": "ESX-192.168.120.141",
            "id": "d5035b5400000001",
            "guid": "9A6945F2-75AF-46B3-965C-24F8E7E4B7FA",
            "ip": "192.168.120.141",
            "state": "Connected",
            "version": "R2_0.6008.0"
        },
        {
            "name": "ESX-192.168.120.143",
            "id": "d5035b5300000000",
            "guid": "7ED3196F-BC9F-450C-9BDF-6CFF329132D2",
            "ip": "192.168.120.143",
            "state": "Connected",
            "version": "R2_0.6008.0"
        },
        {
            "name": "ESX-192.168.120.145",
            "id": "d5035b5500000002",
            "guid": "CA1E5009-3DBD-471E-ADD9-AAB1DC0D19F4",
            "ip": "192.168.120.145",
            "state": "Connected",
            "version": "R2_0.6008.0"
        },
        {
            "name": "ESX-192.168.120.147",
            "id": "d5035b5700000004",
            "guid": "BFA1C14B-31BD-4D99-8D0F-D5A8FCF91FFE",
            "ip": "192.168.120.147",
            "state": "Connected",
            "version": "R2_0.6008.0"
        }
    ],
    "volumes": [
        {
            "name": "influxdb",
            "id": "c047f23800000001",
            "size": 16,
            "storagepool_id": "907f19e300000000",
            "thin": true,
            "mapped_sdcs": [
                {
                    "name": "",
                    "id": "d503826a0000000c",
                    "guid": "FC3602E2-B3A2-4D5B-B6AA-721E1603C51D",
                    "ip": "192.168.120.161",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                }
            ]
        },
        {
            "name": "mongo005",
            "id": "c047cb2800000002",
            "size": 16,
            "storagepool_id": "907f19e300000000",
            "thin": true,
            "mapped_sdcs": [
                {
                    "name": "",
                    "id": "d50382680000000a",
                    "guid": "43B2EC5F-C45F-4049-8020-49A175881EC0",
                    "ip": "192.168.120.162",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                }
            ]
        },
        {
            "name": "scaleio-ds1",
            "id": "c047a41700000000",
            "size": 4096,
            "storagepool_id": "907f19e300000000",
            "thin": true,
            "mapped_sdcs": [
                {
                    "name": "ESX-192.168.120.147",
                    "id": "d5035b5700000004",
                    "guid": "BFA1C14B-31BD-4D99-8D0F-D5A8FCF91FFE",
                    "ip": "192.168.120.147",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                },
                {
                    "name": "ESX-192.168.120.139",
                    "id": "d5035b5600000003",
                    "guid": "E49006E9-9782-4627-8236-17D4DA013C11",
                    "ip": "192.168.120.139",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                },
                {
                    "name": "ESX-192.168.120.143",
                    "id": "d5035b5300000000",
                    "guid": "7ED3196F-BC9F-450C-9BDF-6CFF329132D2",
                    "ip": "192.168.120.143",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                },
                {
                    "name": "ESX-192.168.120.145",
                    "id": "d5035b5500000002",
                    "guid": "CA1E5009-3DBD-471E-ADD9-AAB1DC0D19F4",
                    "ip": "192.168.120.145",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                },
                {
                    "name": "ESX-192.168.120.141",
                    "id": "d5035b5400000001",
                    "guid": "9A6945F2-75AF-46B3-965C-24F8E7E4B7FA",
                    "ip": "192.168.120.141",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                }
            ]
        },
        {
            "name": "test005",
            "id": "c047f24300000003",
            "size": 8,
            "storagepool_id": "907f19e300000000",
            "thin": true,
            "mapped_sdcs": [
                {
                    "name": "",
                    "id": "d503826c0000000e",
                    "guid": "99222D6D-0BED-4DAA-994E-D76D7CD25F08",
                    "ip": "192.168.120.165",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                }
            ]
        }
    ]
}
[root@mnode5 ~]# 

Whoa! sio_report is stating that volume "test005" is mapped to node5! Let's try to unmap it:

ScaleIO-192-168-120-150:~ # scli --mdm_ip 192.168.120.151 --volume_name test005 --unmap_volume_from_sdc --sdc_ip 192.168.120.165 --i_am_sure
Successfully un-mapped volume test005 from SDC 192.168.120.165
ScaleIO-192-168-120-150:~ # 

Yep, the volume was indeed mapped, and it is now unmapped. Now, let's go ahead and try to mount it again:

[root@mnode5 network-scripts]# docker run -ti --volume-driver=rexray -v test005:/test busybox
docker: Error response from daemon: VolumeDriver.Mount: {"Error":"error waiting on volume to mount"}.
[root@mnode5 network-scripts]# 
[root@mnode5 network-scripts]# lsblk
NAME                          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0                             2:0    1    4K  0 disk 
sda                             8:0    0   64G  0 disk 
├─sda1                          8:1    0  512M  0 part /boot
└─sda2                          8:2    0 63.5G  0 part 
  ├─vgsys-swap                253:0    0   12G  0 lvm  [SWAP]
  ├─vgsys-root                253:1    0  9.8G  0 lvm  /
  ├─vgsys-var_log             253:2    0   31G  0 lvm  /var/log
  ├─vgsys-var_tmp             253:3    0    2G  0 lvm  /var/tmp
  ├─vgsys-var                 253:4    0   62G  0 lvm  /var
  ├─vgsys-tmp                 253:5    0  4.9G  0 lvm  /tmp
  └─vgsys-diskdump            253:6    0   10G  0 lvm  /diskdump
sdb                             8:16   0   32G  0 disk 
└─sdb1                          8:17   0   32G  0 part 
  ├─vgcore-users              253:7    0    3G  0 lvm  /users
  ├─vgcore-usr_bltemp         253:8    0    2G  0 lvm  /usr/bltemp
  └─vgcore-opt_bsa_bladelogic 253:9    0    2G  0 lvm  /opt/bsa/bladelogic
sdc                             8:32   0   30G  0 disk 
├─vgsys-var_log               253:2    0   31G  0 lvm  /var/log
└─vgsys-var                   253:4    0   62G  0 lvm  /var
sdd                             8:48   0   40G  0 disk 
└─vgsys-var                   253:4    0   62G  0 lvm  /var
sr0                            11:0    1 1024M  0 rom  

Let's display /dev/disk/by-id:

[root@mnode5 network-scripts]# ls -laF /dev/disk/by-id
total 0
drwxr-xr-x 2 root root 540 Jul 11 08:58 ./
drwxr-xr-x 5 root root 100 Jul 11 08:58 ../
lrwxrwxrwx 1 root root   9 Jul 11 08:58 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -> ../../sr0
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgcore-opt_bsa_bladelogic -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgcore-users -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgcore-usr_bltemp -> ../../dm-8
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-diskdump -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-root -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-swap -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-tmp -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-var -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-var_log -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-name-vgsys-var_tmp -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQC36GA4IQGxH7F28dAYO0VAMst8l76nzYz -> ../../dm-4
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCFekHIQ3HXBksyHuS3BldToeDt805jtRu -> ../../dm-6
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCfUA0KLvTrf2WmwYukv5rp5Bfg0pjE5q9 -> ../../dm-2
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCoQBm5xSeUwlsdT2uzsJJ70LXEowD4BHv -> ../../dm-0
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCqaQXUXaAlyhRxIZL0BOPMNyZe2dXNj5d -> ../../dm-5
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCsXItwxaeSznsYV02me9MRZe8IlWXtjus -> ../../dm-3
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-42Y1cY4mx4WewO0yV1xpDwUat1WbMpQCyn7IxT0ZhJBJxsHXrh11SMy00QvlexDZ -> ../../dm-1
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1ASqnpvAoIDSP96RmB1w520GIER5UWyVY -> ../../dm-9
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1kDGXUNH0rv4QdkY1K2jvyyDg7g79csOe -> ../../dm-7
lrwxrwxrwx 1 root root  10 Jul 11 08:58 dm-uuid-LVM-SDP2hRvacRwfCTKaKUyBgDSLIRKqXKE1TlGVVBP9yMpqeVBXwytdVXDDRywdPo7Z -> ../../dm-8
lrwxrwxrwx 1 root root   9 Jul 11 08:58 lvm-pv-uuid-40qKHi-PeId-Go8A-Eei1-0AZO-63zH-K51JXh -> ../../sdd
lrwxrwxrwx 1 root root  10 Jul 11 08:58 lvm-pv-uuid-cslnL6-qXaR-xAQB-rnxl-0Se6-KUCR-IocCJz -> ../../sda2
lrwxrwxrwx 1 root root  10 Jul 11 08:58 lvm-pv-uuid-farDq1-WPZp-U3h5-oSPL-cS0C-ajdD-j3odJ7 -> ../../sdb1
lrwxrwxrwx 1 root root   9 Jul 11 08:58 lvm-pv-uuid-GqsrcW-Bo4p-R3k5-pZ4l-piL1-6f6H-6Lx21a -> ../../sdc
[root@mnode5 network-scripts]# 

Still no go :(

cduchesne commented 8 years ago

I thought your scaleio data network was a 10.x network? I see 2 192.168 networks in your network list for node5. Is that not the case?

akamalov commented 8 years ago

@cduchesne it is... I just masked it here because this is public. But you're right.

cduchesne commented 8 years ago

Ohh, got it.

Well, the SDC does show as disconnected:

{
            "name": "",
            "id": "d503826c0000000e",
            "guid": "99222D6D-0BED-4DAA-994E-D76D7CD25F08",
            "ip": "192.168.120.165",
            "state": "Disconnected",
            "version": ""
},

Perhaps try a service scini restart?

akamalov commented 8 years ago

Will do.

akamalov commented 8 years ago

Just did. The same result :(

Restarting scini:

[root@node5 network-scripts]# 
[root@node5 network-scripts]# service scini stop
Stopping scini (via systemctl):                            [  OK  ]
[root@node5 network-scripts]# service scini start
Starting scini (via systemctl):                            [  OK  ]

Attempting to mount volume:

[root@node5 network-scripts]# docker run -ti --volume-driver=rexray -v test005:/test busybox
docker: Error response from daemon: VolumeDriver.Mount: {"Error":"no device name returned"}.
[root@node5 network-scripts]#

Re-running sio_report"

[root@node5 ~]# ./sio-report https://192.168.120.166 admin XXXXXXX
{
    "systems": [
        {
            "name": "cluster1",
            "id": "076534e45e5ed7ca",
            "version": "R2_0.6035.0",
            "mdms": [
                {
                    "name": "4cbb452b1d5f7f81",
                    "id": "Manager2",
                    "role": "Manager",
                    "version": "R2_0.6035.0"
                },
                {
                    "name": "644d6bff358563b0",
                    "id": "Manager1",
                    "role": "Manager",
                    "version": "R2_0.6035.0"
                },
                {
                    "name": "4fc0530874c9b832",
                    "id": "TB1",
                    "role": "TieBreaker",
                    "version": "R2_0.6035.0"
                }
            ]
        }
    ],
    "storage_pools": [
        {
            "name": "pool1",
            "id": "907f19e300000000",
            "protection_domain_id": "33399d2500000000"
        }
    ],
    "sdcs": [
        {
            "name": "",
            "id": "d50382690000000b",
            "guid": "0E3679B1-5985-49DD-A52A-A7A33D78D1FC",
            "ip": "192.168.120.164",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826a0000000c",
            "guid": "FC3602E2-B3A2-4D5B-B6AA-721E1603C51D",
            "ip": "192.168.120.161",
            "state": "Disconnected",
            "version": ""
        },
        {
            "name": "",
            "id": "d503826700000009",
            "guid": "F378F48C-A093-4632-9C48-DA2F42A3B699",
            "ip": "192.168.120.160",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826600000008",
            "guid": "13892CAC-1964-43A6-9E80-BE584696DE06",
            "ip": "192.168.120.159",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826500000007",
            "guid": "41C4D39A-0940-4B59-B20B-1AA8974DF98E",
            "ip": "192.168.120.158",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826400000006",
            "guid": "90024D18-61C1-436E-A39A-98D4940F4EF9",
            "ip": "192.168.120.157",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826300000005",
            "guid": "B1B5BB10-F967-4901-91B0-65A64D3F1E44",
            "ip": "192.168.120.156",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "",
            "id": "d503826b0000000d",
            "guid": "28A19C7E-636A-4020-A674-5796BC341D8B",
            "ip": "192.168.120.163",
            "state": "Disconnected",
            "version": ""
        },
        {
            "name": "",
            "id": "d503826c0000000e",
            "guid": "99222D6D-0BED-4DAA-994E-D76D7CD25F08",
            "ip": "192.168.120.165",
            "state": "Disconnected",
            "version": ""
        },
        {
            "name": "",
            "id": "d50382680000000a",
            "guid": "43B2EC5F-C45F-4049-8020-49A175881EC0",
            "ip": "192.168.120.162",
            "state": "Connected",
            "version": "R2_0.6035.0"
        },
        {
            "name": "ESX-192.168.120.139",
            "id": "d5035b5600000003",
            "guid": "E49006E9-9782-4627-8236-17D4DA013C11",
            "ip": "192.168.120.139",
            "state": "Connected",
            "version": "R2_0.6008.0"
        },
        {
            "name": "ESX-192.168.120.141",
            "id": "d5035b5400000001",
            "guid": "9A6945F2-75AF-46B3-965C-24F8E7E4B7FA",
            "ip": "192.168.120.141",
            "state": "Connected",
            "version": "R2_0.6008.0"
        },
        {
            "name": "ESX-192.168.120.143",
            "id": "d5035b5300000000",
            "guid": "7ED3196F-BC9F-450C-9BDF-6CFF329132D2",
            "ip": "192.168.120.143",
            "state": "Connected",
            "version": "R2_0.6008.0"
        },
        {
            "name": "ESX-192.168.120.145",
            "id": "d5035b5500000002",
            "guid": "CA1E5009-3DBD-471E-ADD9-AAB1DC0D19F4",
            "ip": "192.168.120.145",
            "state": "Connected",
            "version": "R2_0.6008.0"
        },
        {
            "name": "ESX-192.168.120.147",
            "id": "d5035b5700000004",
            "guid": "BFA1C14B-31BD-4D99-8D0F-D5A8FCF91FFE",
            "ip": "192.168.120.147",
            "state": "Connected",
            "version": "R2_0.6008.0"
        }
    ],
    "volumes": [
        {
            "name": "influxdb",
            "id": "c047f23800000001",
            "size": 16,
            "storagepool_id": "907f19e300000000",
            "thin": true,
            "mapped_sdcs": [
                {
                    "name": "",
                    "id": "d503826a0000000c",
                    "guid": "FC3602E2-B3A2-4D5B-B6AA-721E1603C51D",
                    "ip": "192.168.120.161",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                }
            ]
        },
        {
            "name": "mongo005",
            "id": "c047cb2800000002",
            "size": 16,
            "storagepool_id": "907f19e300000000",
            "thin": true,
            "mapped_sdcs": [
                {
                    "name": "",
                    "id": "d50382680000000a",
                    "guid": "43B2EC5F-C45F-4049-8020-49A175881EC0",
                    "ip": "192.168.120.162",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                }
            ]
        },
        {
            "name": "scaleio-ds1",
            "id": "c047a41700000000",
            "size": 4096,
            "storagepool_id": "907f19e300000000",
            "thin": true,
            "mapped_sdcs": [
                {
                    "name": "ESX-192.168.120.147",
                    "id": "d5035b5700000004",
                    "guid": "BFA1C14B-31BD-4D99-8D0F-D5A8FCF91FFE",
                    "ip": "192.168.120.147",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                },
                {
                    "name": "ESX-192.168.120.139",
                    "id": "d5035b5600000003",
                    "guid": "E49006E9-9782-4627-8236-17D4DA013C11",
                    "ip": "192.168.120.139",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                },
                {
                    "name": "ESX-192.168.120.143",
                    "id": "d5035b5300000000",
                    "guid": "7ED3196F-BC9F-450C-9BDF-6CFF329132D2",
                    "ip": "192.168.120.143",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                },
                {
                    "name": "ESX-192.168.120.145",
                    "id": "d5035b5500000002",
                    "guid": "CA1E5009-3DBD-471E-ADD9-AAB1DC0D19F4",
                    "ip": "192.168.120.145",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                },
                {
                    "name": "ESX-192.168.120.141",
                    "id": "d5035b5400000001",
                    "guid": "9A6945F2-75AF-46B3-965C-24F8E7E4B7FA",
                    "ip": "192.168.120.141",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                }
            ]
        },
        {
            "name": "test005",
            "id": "c047f24300000003",
            "size": 8,
            "storagepool_id": "907f19e300000000",
            "thin": true,
            "mapped_sdcs": [
                {
                    "name": "",
                    "id": "d503826c0000000e",
                    "guid": "99222D6D-0BED-4DAA-994E-D76D7CD25F08",
                    "ip": "192.168.120.165",
                    "limit_bw_mbps": 0,
                    "limit_iops": 0
                }
            ]
        }
    ]
}

node5 (192.168.120.165) is still showing as Disconnected...
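Since the SDC on node5 is the only piece still reporting Disconnected, a quick sanity check on the client side might help narrow it down. This is just a sketch assuming the default ScaleIO install paths (/opt/emc/scaleio and /etc/emc/scaleio); adjust if your install differs:

# MDM IPs the SDC kernel module is actually trying to reach:
/opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms
# Persisted MDM configuration the module loads at start:
cat /etc/emc/scaleio/drv_cfg.txt
# Basic reachability to the gateway address used for sio-report above:
ping -c 3 192.168.120.166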

cduchesne commented 8 years ago

Hmm, if you manually map the volume to that SDC with scli, are you able to do anything with it? It still looks to me like there is a network problem. Do you have time for a quick WebEx? Send it my way.
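A rough sketch of that manual check, run from the primary MDM after logging in with scli (the flags are from memory for ScaleIO 2.x, so verify against scli --help; the volume name and SDC IP are taken from the report above):

# Confirm which SDCs the MDM currently sees as connected:
scli --query_all_sdc
# Manually map the test volume to node5's SDC:
scli --map_volume_to_sdc --volume_name test005 --sdc_ip 192.168.120.165
# Verify the mapping took, then re-check /dev/disk/by-id on node5:
scli --query_volume --volume_name test005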

akamalov commented 8 years ago

One moment. Generating a WebEx session.

akamalov commented 8 years ago

Sent

akamalov commented 8 years ago

@cduchesne - Thank You! Thank You! Thank you!!!