hpe-storage / python-hpedockerplugin

HPE Native Docker Plugin
Apache License 2.0

Docker 3.1: Error while mounting SnapShot #523

Closed sonawane-shashikant closed 5 years ago

sonawane-shashikant commented 5 years ago

Test Bed -

[stack@manager ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.6 (Maipo)
[stack@manager ~]$

[stack@manager ~]$ docker plugin inspect hpe | grep PluginRef
        "PluginReference": "docker.io/hpestorage/hpedockervolumeplugin:3.1",
[stack@manager ~]$

CSIM-EOS07_1611168 cli% showversion
Release version 3.3.1 (MU2)
Patches: P63

Component Name    Version
CLI Server        3.3.1 (MU2)
CLI Client        3.3.1
System Manager    3.3.1 (P63)
Kernel            3.3.1 (MU2)
TPD Kernel Code   3.3.1 (P63)
TPD Kernel Patch  3.3.1 (P63)
CSIM-EOS07_1611168 cli%

Steps to reproduce -

  1. Create a replicated volume.
  2. Create a snapshot of the replicated volume.
  3. Create a snapshot schedule for the snapshot.
  4. Try to mount the snapshot and verify the error message.

Expected Results - The snapshot should mount successfully with the appropriate mount path.

Actual Result - The snapshot mount command returned an error message, although the mount actually succeeded on both Docker and 3PAR. (Note: the error is as below: ceb16d257a2ea39d2a09e6a58944b24f73daa9eb6343ec6cc18ba0d65ac2e54a docker: Error response from daemon: error while mounting volume '': Post http://%2Frun%2Fdocker%2Fplugins%2F2f7045d3408bd0770f22c7d3dca333aefbaf90f391183059022144f598cdf01b%2Fhpe.sock/VolumeDriver.Mount: net/http: request canceled (Client.Timeout exceeded while awaiting headers).)
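For readability: the garbled-looking host portion of the URL in the error is just the percent-encoded Unix-socket path of the plugin, and "Client.Timeout exceeded while awaiting headers" means the plugin did not answer the VolumeDriver.Mount request within the Docker client's timeout (consistent with the mount eventually completing on the backend). A minimal sketch decoding the path, using the plugin ID copied from the error above:

```python
from urllib.parse import unquote

# Percent-encoded socket path copied verbatim from the mount error above.
encoded = ("%2Frun%2Fdocker%2Fplugins%2F"
           "2f7045d3408bd0770f22c7d3dca333aefbaf90f391183059022144f598cdf01b"
           "%2Fhpe.sock")

# Docker URL-encodes the plugin's Unix-socket path so it can appear as
# the host portion of the plugin's HTTP endpoint.
print(unquote(encoded))
# → /run/docker/plugins/2f7045d3408bd0770f22c7d3dca333aefbaf90f391183059022144f598cdf01b/hpe.sock
```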

Detailed output -

snapshot:

[stack@manager ~]$ docker volume create -d hpe --name Test-vol -o replicationGroup=RCG
Test-vol
[stack@manager ~]$ docker volume inspect Test-vol
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "",
        "Name": "Test-vol",
        "Options": {
            "replicationGroup": "RCG"
        },
        "Scope": "global",
        "Status": {
            "rcg_detail": {
                "policies": {
                    "autoFailover": false,
                    "autoRecover": false,
                    "overPeriodAlert": false,
                    "pathManagement": false
                },
                "rcg_name": "RCG",
                "role": "Primary"
            },
            "volume_detail": {
                "3par_vol_name": "dcv-xmp.Hos2T9WbxTHbpm8fKQ",
                "backend": "DEFAULT",
                "compression": null,
                "cpg": "RT_SRC_CPG",
                "domain": "RT_DOMAIN",
                "flash_cache": null,
                "fsMode": null,
                "fsOwner": null,
                "mountConflictDelay": 30,
                "provisioning": "thin",
                "secondary_cpg": "RT_DEST_CPG",
                "secondary_snap_cpg": "RT_DEST_SNAP_CPG",
                "size": 100,
                "snap_cpg": "RT_SRC_SNAP_CPG"
            }
        }
    }
]
[stack@manager ~]$ ls -lrt /dev/disk/by-path
total 0
lrwxrwxrwx. 1 root root  9 Mar 12 20:16 pci-0000:03:00.0-scsi-0:1:0:0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Mar 12 20:16 pci-0000:03:00.0-scsi-0:1:0:0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Mar 12 20:16 pci-0000:03:00.0-scsi-0:1:0:0-part3 -> ../../sda3
lrwxrwxrwx. 1 root root 10 Mar 12 20:16 pci-0000:03:00.0-scsi-0:1:0:0-part2 -> ../../sda2
[stack@manager ~]$ docker volume create -d hpe --name Snapshot -o virtualCopyOf=Test-vol -o scheduleFrequency="10 " -o scheduleName=Schedule-Name -o snapshotPrefix=shashi
Snapshot
[stack@manager ~]$ docker volume inspect Snapshot
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "",
        "Name": "Snapshot",
        "Options": {
            "scheduleFrequency": "10 ",
            "scheduleName": "Schedule-Name",
            "snapshotPrefix": "shashi",
            "virtualCopyOf": "Test-vol"
        },
        "Scope": "global",
        "Status": {
            "snap_detail": {
                "3par_vol_name": "dcs-oRz8-RXbTUWMECasIbY6PQ",
                "backend": "DEFAULT",
                "compression": null,
                "expiration_hours": null,
                "fsMode": null,
                "fsOwner": null,
                "is_snap": true,
                "mountConflictDelay": 30,
                "parent_id": "c66a7e1e-8b36-4fd5-9bc5-31dba66f1f29",
                "parent_volume": "Test-vol",
                "provisioning": "thin",
                "retention_hours": null,
                "size": 100,
                "snap_cpg": "RT_SRC_SNAP_CPG",
                "snap_schedule": {
                    "sched_frequency": "10 ",
                    "sched_snap_exp_hrs": null,
                    "sched_snap_ret_hrs": null,
                    "schedule_name": "Schedule-Name",
                    "snap_name_prefix": "shashi"
                }
            }
        }
    }
]
CSIM-EOS07_1611168 cli% showvv -cpg RT_SRC_CPG -showcols Name,Comment
Name                       Comment
dcv-xmp.Hos2T9WbxTHbpm8fKQ {"volume_id": "c66a7e1e-8b36-4fd5-9bc5-31dba66f1f29", "name": "c66a7e1e-8b36-4fd5-9bc5-31dba66f1f29", "type": "Docker", "display_name": "Test-vol"}
osv--Cm-jXxqTSe0bDW1xt1g5A {"volume_type_name": "3pariscsi_1", "display_name": "v1", "name": "volume-fc29bf8d-7c6a-4d27-b46c-35b5c6dd60e4", "volume_type_id": "ed6b265a-8f7a-4d9b-8491-9814359bb076", "volume_id": "fc29bf8d-7c6a-4d27-b46c-35b5c6dd60e4", "qos": {}, "type": "OpenStack"}

total
CSIM-EOS07_1611168 cli% showvv -cpg RT_SRC_SNAP_CPG -showcols Name,Comment
Name                       Comment
dcv-xmp.Hos2T9WbxTHbpm8fKQ {"volume_id": "c66a7e1e-8b36-4fd5-9bc5-31dba66f1f29", "name": "c66a7e1e-8b36-4fd5-9bc5-31dba66f1f29", "type": "Docker", "display_name": "Test-vol"}
dcs-oRz8-RXbTUWMECasIbY6PQ {"volume_name": "Test-vol", "volume_id": "c66a7e1e-8b36-4fd5-9bc5-31dba66f1f29", "display_name": "Snapshot", "description": "snapshot of volume Test-vol"}
osv--Cm-jXxqTSe0bDW1xt1g5A {"volume_type_name": "3pariscsi_1", "display_name": "v1", "name": "volume-fc29bf8d-7c6a-4d27-b46c-35b5c6dd60e4", "volume_type_id": "ed6b265a-8f7a-4d9b-8491-9814359bb076", "volume_id": "fc29bf8d-7c6a-4d27-b46c-35b5c6dd60e4", "qos": {}, "type": "OpenStack"}
oss-3ZA.drcqQrO2jPLlrt36Jg {"volume_id": "fc29bf8d-7c6a-4d27-b46c-35b5c6dd60e4", "display_name": "s1", "description": null, "volume_name": "volume-fc29bf8d-7c6a-4d27-b46c-35b5c6dd60e4"}
osv-dQ68H.0rROCWksBYaAIoGA {"snapshot_id": "dd903e76-b72a-42b3-b68c-f2e5aeddfa26", "display_name": "v2", "volume_id": "750ebc1f-ed2b-44e0-9692-c05868022818"}

total
[stack@manager ~]$ sudo docker run -it -d -v Snapshot:/data1 --volume-driver hpe --name mounter3 --rm busybox /bin/sh
ceb16d257a2ea39d2a09e6a58944b24f73daa9eb6343ec6cc18ba0d65ac2e54a
docker: Error response from daemon: error while mounting volume '': Post http://%2Frun%2Fdocker%2Fplugins%2F2f7045d3408bd0770f22c7d3dca333aefbaf90f391183059022144f598cdf01b%2Fhpe.sock/VolumeDriver.Mount: net/http: request canceled (Client.Timeout exceeded while awaiting headers).

[stack@manager ~]$ docker ps
CONTAINER ID  IMAGE                       COMMAND                 CREATED     STATUS                         PORTS                                                               NAMES
16162e0451d7  swarm:latest                "/swarm manage --tls…"  3 days ago  Restarting (1) 26 seconds ago                                                                      swarm-agent-master
c1aaadbf438e  quay.io/coreos/etcd:v2.2.1  "/etcd -name etcd0 -…"  7 days ago  Up 43 hours                    0.0.0.0:2379-2380->2379-2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp  etcd
[stack@manager ~]$

[stack@manager ~]$ ls -lrt /dev/disk/by-path
total 0
lrwxrwxrwx. 1 root root  9 Mar 12 20:16 pci-0000:03:00.0-scsi-0:1:0:0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Mar 12 20:16 pci-0000:03:00.0-scsi-0:1:0:0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Mar 12 20:16 pci-0000:03:00.0-scsi-0:1:0:0-part3 -> ../../sda3
lrwxrwxrwx. 1 root root 10 Mar 12 20:16 pci-0000:03:00.0-scsi-0:1:0:0-part2 -> ../../sda2
lrwxrwxrwx. 1 root root  9 Mar 22 12:17 ip-10.50.17.223:3260-iscsi-iqn.2000-05.com.3pardata:21220002ac002ba0-lun-0 -> ../../sdb
lrwxrwxrwx. 1 root root  9 Mar 22 12:17 ip-10.50.17.222:3260-iscsi-iqn.2000-05.com.3pardata:21210002ac002ba0-lun-0 -> ../../sdc

CSIM-EOS07_1611168 cli% showvlun -v dcv-xmp.Hos2T9WbxTHbpm8fKQ
Active VLUNs
no vluns listed

VLUN Templates
no vluns listed

CSIM-EOS07_1611168 cli% showvlun -v dcs-oRz8-RXbTUWMECasIbY6PQ
Active VLUNs
Domain     Lun VVName                     HostName -------Host_WWN/iSCSI_Name-------- Port  Type        Status ID
RT_DOMAIN    0 dcs-oRz8-RXbTUWMECasIbY6PQ manager  iqn.1994-05.com.redhat:4fa02e3eee9 1:2:1 matched set active  0
RT_DOMAIN    0 dcs-oRz8-RXbTUWMECasIbY6PQ manager  iqn.1994-05.com.redhat:4fa02e3eee9 1:2:2 matched set active  0

        2 total

VLUN Templates
Domain     Lun VVName                     HostName -Host_WWN/iSCSI_Name- Port  Type
RT_DOMAIN    0 dcs-oRz8-RXbTUWMECasIbY6PQ manager  ----------------      1:2:1 matched set
RT_DOMAIN    0 dcs-oRz8-RXbTUWMECasIbY6PQ manager  ----------------      1:2:2 matched set

        2 total

CSIM-EOS07_1611168 cli% showsched
------ Schedule ------
SchedName     File/Command                                                       Min Hour DOM Month DOW CreatedBy Status Alert NextRunTime
Schedule-Name createsv -f shashi.@y@@m@@d@@H@@M@@S@ dcv-xmp.Hos2T9WbxTHbpm8fKQ   10                     3paradm   active Y     2019-03-22 00:10:00 PDT
schedule-name createsv -f shashi.@y@@m@@d@@H@@M@@S@ dcv-6HAv3kH6Q-GGtuvsKI7JPw   10                     3paradm   active Y     2019-03-22 00:10:00 PDT

2 total
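The scheduled snapshots seen later in the showvv output (e.g. shashi.190322001001) follow the @y@@m@@d@@H@@M@@S@ pattern in the createsv commands above. Assuming those tokens expand to the two-digit year, month, day, hour, minute, and second of the scheduled run (which matches the observed name), the naming can be sketched as:

```python
from datetime import datetime

def scheduled_snap_name(prefix: str, when: datetime) -> str:
    """Build a snapshot name the way the schedule's @y@@m@@d@@H@@M@@S@
    tokens appear to expand; an assumption inferred from the names
    observed in the showvv listing, not from 3PAR documentation."""
    return f"{prefix}.{when.strftime('%y%m%d%H%M%S')}"

# The run that produced shashi.190322001001 in the showvv output below.
print(scheduled_snap_name("shashi", datetime(2019, 3, 22, 0, 10, 1)))
# → shashi.190322001001
```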

CSIM-EOS07_1611168 cli% showrcopy groups

Remote Copy System Information Status: Started, Normal

Group Information

Name Target             Domain    Status  Role    Mode Options
RCG  CSIM-EOS12_1611702 RT_DOMAIN Started Primary Sync

LocalVV                    ID    RemoteVV                   ID   SyncStatus LastSyncTime
dcv-xmp.Hos2T9WbxTHbpm8fKQ 50975 dcv-xmp.Hos2T9WbxTHbpm8fKQ 2990 Synced     NA

CSIM-EOS07_1611168 cli% showvv -cpg RT_SRC_SNAP_CPG -showcols Name,Comment
Name                       Comment
dcv-xmp.Hos2T9WbxTHbpm8fKQ {"volume_id": "c66a7e1e-8b36-4fd5-9bc5-31dba66f1f29", "name": "c66a7e1e-8b36-4fd5-9bc5-31dba66f1f29", "type": "Docker", "display_name": "Test-vol"}
dcs-oRz8-RXbTUWMECasIbY6PQ {"volume_name": "Test-vol", "volume_id": "c66a7e1e-8b36-4fd5-9bc5-31dba66f1f29", "display_name": "Snapshot", "description": "snapshot of volume Test-vol"}
shashi.190322001001        --
osv--Cm-jXxqTSe0bDW1xt1g5A {"volume_type_name": "3pariscsi_1", "display_name": "v1", "name": "volume-fc29bf8d-7c6a-4d27-b46c-35b5c6dd60e4", "volume_type_id": "ed6b265a-8f7a-4d9b-8491-9814359bb076", "volume_id": "fc29bf8d-7c6a-4d27-b46c-35b5c6dd60e4", "qos": {}, "type": "OpenStack"}
oss-3ZA.drcqQrO2jPLlrt36Jg {"volume_id": "fc29bf8d-7c6a-4d27-b46c-35b5c6dd60e4", "display_name": "s1", "description": null, "volume_name": "volume-fc29bf8d-7c6a-4d27-b46c-35b5c6dd60e4"}
osv-dQ68H.0rROCWksBYaAIoGA {"snapshot_id": "dd903e76-b72a-42b3-b68c-f2e5aeddfa26", "display_name": "v2", "volume_id": "750ebc1f-ed2b-44e0-9692-c05868022818"}

total

sonawane-shashikant commented 5 years ago

docker.service_log_snapshot.txt log_snapshot.txt

wdurairaj commented 5 years ago

If snapshotting of a replicated volume was not tested on 3.0, why is it being tested now and marked as "high"? Can you confirm this functionality on 3.0 and see if this issue is recreatable?

Right now, I'm marking this as "medium" based on the above comments.

prablr79 commented 5 years ago

It was very much tested in 3.0, and we are able to reproduce the issue.

sonawane-shashikant commented 5 years ago

This issue was reproduced on Docker plugin 3.0.

Test bed details: 10.50.9.15 - single node. Replication backends: 10.50.3.7, 10.50.3.22.

Performed the following Steps:

  1. Created a replicated volume.
  2. Created a snapshot schedule for the volume.
  3. Mounted the snapshot. The mount failed with the error message.

Please find the attached files of logs and output.

snapshot error on 3.0.txt

log_10thApril.txt

plugin_logs_10thApril.txt

nilangekarss commented 5 years ago

@sonawane-shashikant I am not able to reproduce this; it is working fine for me.

sonawane-shashikant commented 5 years ago

This bug is not reproducible on 3.1, but it is reproducible on 3.0. Please see the output below, captured while testing.

BUG 523 on 3.0.txt Bug 523 on 3.1.txt

sonawane-shashikant commented 5 years ago

Closing as it is working fine in 3.1 plugin.