Closed wdurairaj closed 5 years ago
This PR addresses the issue: https://github.com/hpe-storage/python-hpedockerplugin/pull/426
This issue is verified as FIXED and can be closed.
The following steps were used to verify the issue.
[docker@cld6b10 ~]$ docker volume ls
DRIVER              VOLUME NAME
[docker@cld6b10 ~]$ vi /etc/hpedockerplugin/hpe.conf
[docker@cld6b10 ~]$ docker plugin ls
ID                  NAME                                         DESCRIPTION                    ENABLED
4a80a1ee515b        docker/telemetry:1.0.0.linux-x86_64-stable   Docker Inc. metrics exporter   false
b35ee19b6cb9        hpe:latest                                   HPE Docker Volume Plugin       true
[docker@cld6b10 ~]$ docker plugin rm hpe
Error response from daemon: plugin hpe:latest is enabled
[docker@cld6b10 ~]$ docker plugin disable hpe
hpe
[docker@cld6b10 ~]$ docker plugin rm hpe
hpe
[docker@cld6b10 ~]$ docker plugin install hpestorage/hpedockervolumeplugin:3.0 --disable --alias hpe
Plugin "hpestorage/hpedockervolumeplugin:3.0" is requesting the following privileges:
[docker@cld6b10 ~]$ for i in `seq 5`; do docker volume create -d hpe --name volume$i -o backend=3PAR & done
[1] 15761
[2] 15762
[3] 15763
[4] 15764
[5] 15765
[docker@cld6b10 ~]$ volume3
volume1
volume4
volume2
volume5
[1]   Done    docker volume create -d hpe --name volume$i -o backend=3PAR
[2]   Done    docker volume create -d hpe --name volume$i -o backend=3PAR
[3]   Done    docker volume create -d hpe --name volume$i -o backend=3PAR
[4]-  Done    docker volume create -d hpe --name volume$i -o backend=3PAR
[5]+  Done    docker volume create -d hpe --name volume$i -o backend=3PAR
[docker@cld6b10 ~]$ docker volume ls
DRIVER              VOLUME NAME
hpe:latest          volume1
hpe:latest          volume2
hpe:latest          volume3
hpe:latest          volume4
hpe:latest          volume5
[docker@cld6b10 ~]$ docker voluem inspect volume1
docker: 'voluem' is not a docker command.
See 'docker --help'
[docker@cld6b10 ~]$ docker volume inspect volume1
[
    {
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "/opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac0000000000000828b00012113",
        "Name": "volume1",
        "Options": {
            "backend": "3PAR"
        },
        "Scope": "global",
        "Status": {
            "volume_detail": {
                "3par_vol_name": "dcv-rEKc58jDS-ir8rQKUEd8Kw",
                "backend": "3PAR",
                "compression": null,
                "cpg": "SHASHI",
                "domain": "SHASHI",
                "flash_cache": null,
                "fsMode": null,
                "fsOwner": null,
                "mountConflictDelay": 30,
                "provisioning": "thin",
                "size": 100,
                "snap_cpg": "SHASHI"
            }
        }
    }
]
[docker@cld6b10 ~]$ docker volume inspect volume2
[
    {
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "/opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac0000000000000828d00012113",
        "Name": "volume2",
        "Options": {
            "backend": "3PAR"
        },
        "Scope": "global",
        "Status": {
            "volume_detail": {
                "3par_vol_name": "dcv-ULL8AhoGRJS-RAMoDJDWtg",
                "backend": "3PAR",
                "compression": null,
                "cpg": "SHASHI",
                "domain": "SHASHI",
                "flash_cache": null,
                "fsMode": null,
                "fsOwner": null,
                "mountConflictDelay": 30,
                "provisioning": "thin",
                "size": 100,
                "snap_cpg": "SHASHI"
            }
        }
    }
]
[docker@cld6b10 ~]$ docker plugin disable hpe
Error response from daemon: plugin hpe:latest is in use
[docker@cld6b10 ~]$ docker plugin disable hpe --force
hpe
[docker@cld6b10 ~]$ docker plugin enable hpe
hpe
[docker@cld6b10 ~]$ docker volume create -d hpe --name SNAP-volume1 -o virtualCopyOf=volume1
SNAP-volume1
[docker@cld6b10 ~]$ docker volume ls
DRIVER VOLUME NAME
hpe:latest SNAP-volume1
hpe:latest volume1
hpe:latest volume2
hpe:latest volume3
hpe:latest volume4
hpe:latest volume5
[docker@cld6b10 ~]$ docker volume create -d hpe --name SNAP-volume1 -o virtualCopyOf=volume2 -o scheduleName=SCHEDULE -o scheduleFrequency="15 " -o snapshotPrefix=ABC -o expHrs=1
SNAP-volume1
[docker@cld6b10 ~]$ docker volume ls
DRIVER VOLUME NAME
hpe:latest SNAP-volume1
hpe:latest volume1
hpe:latest volume2
hpe:latest volume3
hpe:latest volume4
hpe:latest volume5
[docker@cld6b10 ~]$ docker volume create -d hpe --name SNAP-volume2 -o virtualCopyOf=volume2 -o scheduleName=SCHEDULE -o scheduleFrequency="15 " -o snapshotPrefix=ABC -o expHrs=1
SNAP-volume2
[docker@cld6b10 ~]$ docker volume ls
DRIVER VOLUME NAME
hpe:latest SNAP-volume1
hpe:latest SNAP-volume2
hpe:latest volume1
hpe:latest volume2
hpe:latest volume3
hpe:latest volume4
hpe:latest volume5
[docker@cld6b10 ~]$ for i in `seq 5`; do docker run -it -d -v volume$i:/data --volume-driver hpe --name MOUNTER$i --rm busybox /bin/sh & done
[1] 18087
[2] 18088
[3] 18089
[4] 18090
[5] 18091
[docker@cld6b10 ~]$ 4e8ac18bdfe4985329f57eb2e785b4495c9f520dfbd23505a38ba639939b0b13
a2c91e6c2edaf61227bbf265f5b6663c95439af4c554247dadfcb6b8f43eb360
203e08d915786368729efeb7763283767c3afd4991a62c6cb93462b50febab60
e4ff5b2893124d43532b736632a1560d99abb554bb1b984a50baf6b9d5e18c53
e4cc24442e566c12199a37a4a070fd38d383ed8ee0263b2d3b2f5a72b8f71325
[1]   Done    docker run -it -d -v volume$i:/data --volume-driver hpe --name MOUNTER$i --rm busybox /bin/sh
[2]   Done    docker run -it -d -v volume$i:/data --volume-driver hpe --name MOUNTER$i --rm busybox /bin/sh
[3]   Done    docker run -it -d -v volume$i:/data --volume-driver hpe --name MOUNTER$i --rm busybox /bin/sh
[4]-  Done    docker run -it -d -v volume$i:/data --volume-driver hpe --name MOUNTER$i --rm busybox /bin/sh
[5]+  Done    docker run -it -d -v volume$i:/data --volume-driver hpe --name MOUNTER$i --rm busybox /bin/sh
[docker@cld6b10 ~]$
[root@cld6b10 ~]# tail -f /var/log/messages | grep "from cache"
Nov 20 17:46:48 cld6b10 dockerd: time="2018-11-20T17:46:48+05:30" level=info msg="2018-11-20 12:16:48.693 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume1 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:46:49 cld6b10 dockerd: time="2018-11-20T17:46:49+05:30" level=info msg="2018-11-20 12:16:49.551 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume5 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:46:50 cld6b10 dockerd: time="2018-11-20T17:46:50+05:30" level=info msg="2018-11-20 12:16:50.367 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume1 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:47:00 cld6b10 dockerd: time="2018-11-20T17:47:00+05:30" level=info msg="2018-11-20 12:17:00.570 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume3 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:47:01 cld6b10 dockerd: time="2018-11-20T17:47:01+05:30" level=info msg="2018-11-20 12:17:01.399 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume5 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:49:22 cld6b10 dockerd: time="2018-11-20T17:49:22+05:30" level=info msg="2018-11-20 12:19:22.052 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume1 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:49:56 cld6b10 dockerd: time="2018-11-20T17:49:56+05:30" level=info msg="2018-11-20 12:19:56.374 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume2 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:50:49 cld6b10 dockerd: time="2018-11-20T17:50:49+05:30" level=info msg="2018-11-20 12:20:49.987 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume1 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:50:52 cld6b10 dockerd: time="2018-11-20T17:50:52+05:30" level=info msg="2018-11-20 12:20:52.246 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume2 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:50:54 cld6b10 dockerd: time="2018-11-20T17:50:54+05:30" level=info msg="2018-11-20 12:20:54.427 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume4 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:50:56 cld6b10 dockerd: time="2018-11-20T17:50:56+05:30" level=info msg="2018-11-20 12:20:56.544 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume3 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:50:58 cld6b10 dockerd: time="2018-11-20T17:50:58+05:30" level=info msg="2018-11-20 12:20:58.661 14 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume5 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 17:55:10 cld6b10 dockerd: time="2018-11-20T17:55:10+05:30" level=info msg="2018-11-20 12:25:10.358 21 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume1 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 18:04:16 cld6b10 dockerd: time="2018-11-20T18:04:16+05:30" level=info msg="2018-11-20 12:34:16.833 21 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache SNAP-volume1 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
Nov 20 18:06:17 cld6b10 dockerd: time="2018-11-20T18:06:17+05:30" level=info msg="2018-11-20 12:36:17.401 21 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume2 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
^C
[root@cld6b10 ~]# tail -f /var/log/messages | grep "from cache"
Nov 20 18:10:45 cld6b10 dockerd: time="2018-11-20T18:10:45+05:30" level=info msg="2018-11-20 12:40:45.434 21 DEBUG hpedockerplugin.backend_orchestrator [-] Returning the backend details from cache volume2 , 3PAR get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:95" plugin=e6db8104bae1c7b5b3916a7655450035cca1694c1d5ed740f87f2c8addcccd36
^C
[root@cld6b10 ~]#
Closing this issue based on Shashi's observation.
Currently, the volume -> backend cache is built during volume create and read during other operations on the volume. This caused cache lookups to fail after a plugin restart when operations were performed on old volumes (those created before the restart), since the in-memory cache was empty.
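The fix described above can be sketched as a cache-miss fallback: on a lookup miss, repopulate the in-memory cache from the persistent volume metadata instead of failing. This is a minimal illustrative sketch only; the class and method names here (other than `get_volume_backend_details`, which appears in the logs) are hypothetical, not the plugin's actual API.

```python
# Hypothetical sketch of the volume -> backend cache pattern discussed above.
# `store` stands in for persistent volume metadata (e.g. etcd), which
# survives a plugin restart; the dict cache does not.

class Orchestrator:
    def __init__(self, store):
        self._store = store          # persistent volume -> backend metadata
        self._backend_cache = {}     # in-memory cache, built on create

    def create_volume(self, name, backend):
        self._store[name] = backend
        self._backend_cache[name] = backend   # cache populated at create time

    def get_volume_backend_details(self, name):
        # After a restart the in-memory cache is empty, so volumes created
        # before the restart miss here; falling back to the persistent
        # store (and repopulating the cache) keeps old volumes usable.
        backend = self._backend_cache.get(name)
        if backend is None:
            backend = self._store.get(name)
            if backend is not None:
                self._backend_cache[name] = backend
        return backend
```

With this fallback, a fresh plugin process serving a pre-restart volume takes one slow-path lookup, after which subsequent operations hit the cache (the "Returning the backend details from cache" messages in the log).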