Closed govint closed 7 years ago
Multi-writer VMDKs are used for cluster file systems only and require in-VM cluster software that synchronizes access to the shared block device. There is no use case for enabling this unless somebody decides to put Oracle RAC or a CFS into a container without the admin access to do it statically. It is super dangerous and pretty much guarantees data corruption if multi-writer is allowed and the disk is attached, with a regular filesystem mounted, on two VMs.
Reopening. Multi-writer disks are used today for clustered apps, and how clustered apps evolve for containers isn't fixed today. We can keep this open, and how shared disks are supported can be identified later. But for sure we will have containers sharing disks across VMs and then across hosts. VIC is a great example of VMs running on different hosts and attaching the same volumes (each VM is a container).
I won't approach this from a legacy mindset to disallow these features in a container context.
Govindan - clustered block devices are not very interesting anymore. They are still used for a witness (e.g. RAC or MSCS), but they are not used for actual data, at least not in "multi-writer mode". None of the local file systems will, obviously, work in this mode without corrupting data. The trend is to use "local" (single-computer write) block devices and coordinate access via cluster components like etcd. In fact, modern distributed file systems are all built this way. And the witness is already supported via a share (SMB3), witness objects (VSAN), or persistent reservations (iSCSI 3+, supported by RAC and MSCS and I suspect every other clustered storage solution out there), so multi-writer is not necessary for them either.
I doubt we'll ever do this without hearing from customers first - and if we hear from customers, we will have to enter the details then anyway, so I do not see much value in keeping this one around.
However - if you feel strongly about keeping it open, fine with me.
At the very least, the readme/wiki should make it clear that a volume can only be mounted on a single VM at a time.
Hi. What if I would like to create a volume accessible on all nodes across a Swarm cluster, so that only one container at a time would be attached to it? The Swarm manager would decide on which node to run the container.
That would be possible: a volume created on shared storage is accessible on all nodes, and one container can attach that volume from any host in the cluster.
I created a volume on shared storage, but it is visible only to the node on which I created it. Every node can create different volumes on shared storage with the same name, e.g. Vol1@datastor.
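As an aside, the `vol@datastore` form used throughout this thread namespaces a volume by datastore; a short name without `@` resolves against the VM's default datastore, which is why the same short name can end up referring to different VMDKs on different nodes. A minimal sketch of that split (a hypothetical helper for illustration, not the plugin's actual code):

```python
def parse_full_volume_name(full_name, default_datastore):
    """Split a docker-volume-vsphere style name such as 'Vol1@datastore'
    into (volume, datastore). Names without '@' fall back to the caller's
    default datastore."""
    if "@" in full_name:
        vol, _, datastore = full_name.partition("@")
        return vol, datastore
    return full_name, default_datastore

print(parse_full_volume_name("Test@store-1", "datastore-2"))
print(parse_full_volume_name("Vol1", "datastore-2"))
```

Two nodes with different default datastores can therefore both create a bare `Vol1` and get two distinct volumes.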
Can you list the steps to create the volume, and the results of "docker volume ls" on both the creating and non-creating nodes?
Can you also give the output of "/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls" on both nodes?
node-1: docker volume create --driver=vmdk --name=Test@store-1 -o size=10Gb -o diskformat=thin
node-1: vmdk Test@store-1
host-1:
Volume Datastore Created By VM Created Attached Policy Capacity Used Disk Format
Test store-1 node-1
@bk1te can you also show the disk config from the two hosts and show the details of store-1 on both hosts.
On Wed, Feb 15, 2017 at 3:55 PM, bk1te notifications@github.com wrote:
node-1: docker volume create --driver=vmdk --name=Test@store-1 -o size=10Gb -o diskformat=thin
node-1: vmdk Test@store-1
host-1:
Volume Datastore Created By VM Created Attached Policy Capacity Used Disk Format
Test store-1 node-1 detached N/A 10GB 145Mb thin ext4 read-write independent_persist
node-2: empty
host-2: empty
Sorry, can't copy-paste directly.
@govint can you link me what commands to execute
@bk1te I assume store-1 is accessible to the ESX hosts where the VMs are running. Have you installed the VIB on all ESX hosts where the Docker VMs are running?
Are the node-1 and node-2 VMs on the same datastore?
Can you paste the output of the following command from the node-1 and node-2 VMs:
docker volume create --driver=vmdk --name=Test@badDS
We are also available on Slack:
https://vmwarecode.slack.com/messages/docker-volume-vsphere/
@bk1te, please use "esxcfg-scsidevs -m" and "esxcfg-scsidevs -l" on both of the ESX hosts to which the volume has been shared. Please post the output of both commands from both hosts.
Running the command "docker volume create --driver=vmdk --name=Test@badDS" on both VMs returned:
Error response from daemon: create Test@badDS: VolumeDriver.Create: Server returned an error: TypeError('can only join an iterable',)
@bk1te, is this a different problem? In that case I suggest making a new issue - two issues, perhaps: one for the earlier problem where volumes aren't visible on two nodes, and one for this.
It is not an issue; badDS does not exist.
esxcfg-scsidevs -m on both esx hosts returned:
naa.
esxcfg-scsidevs -l on both esx hosts returned:
naa.
@bk1te wrote:
It is not an issue; badDS does not exist.
It is a bug somewhere in our vmdk_ops service - this command is supposed to print something like "badDS datastore is not found; available datastores are: ".
There seems to be something special about your datastores list that exposes the bug. Would it be possible to list the datastores (ls -l /vmfs/volumes)?
@msterin, on "docker volume create --driver=vmdk --name=Test@badDS":
Unhandled Exception:
Traceback (most recent call last):
  File "/usr/lib/vmware/vmdkops/bin/vmdk_ops.py", line 1373, in execRequestThread
    opts=opts)
  File "/usr/lib/vmware/vmdkops/bin/vmdk_ops.py", line 746, in executeRequest
    % (datastore, ", ".join(get_datastore_names_list), vm_datastore))
TypeError: can only join an iterable
@bk1te
Unhandled Exception:
Traceback (most recent call last):
  File "/usr/lib/vmware/vmdkops/bin/vmdk_ops.py", line 1373, in execRequestThread
    opts=opts)
  File "/usr/lib/vmware/vmdkops/bin/vmdk_ops.py", line 746, in executeRequest
    % (datastore, ", ".join(get_datastore_names_list), vm_datastore))
TypeError: can only join an iterable
It is being tracked at #817.
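The traceback itself points at the likely cause: `", ".join(get_datastore_names_list)` joins the function object rather than its return value, and `str.join` raises "can only join an iterable" for any non-iterable argument. A minimal standalone reproduction (the datastore names here are stand-ins, not the real helper's output):

```python
def get_datastore_names_list():
    # Stand-in for the real vmdk_ops helper that enumerates datastores.
    return ["store-1", "datastore-0"]

datastore = "badDS"

# Buggy form, as in the traceback: the function object itself is passed
# to join, which raises TypeError: can only join an iterable.
try:
    "Datastore %s not found; available: %s" % (
        datastore, ", ".join(get_datastore_names_list))
except TypeError as e:
    print(e)  # can only join an iterable

# Likely fix: call the function so join receives a list of strings.
msg = "Datastore %s not found; available: %s" % (
    datastore, ", ".join(get_datastore_names_list()))
print(msg)
```

This also explains why the error only surfaces on the bad-datastore path: the join sits inside the error-message formatting, so it only runs when the requested datastore is not found.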
@bk1te: can you please try what @msterin is asking?
Would it be possible to list the datastores ? ls -l /vmfs/volumes ?
I'll try. There are 15 directories like \<uuid> and 11 symlinks to directories. When I try to list the \<uuid> directories that do not have symlinks, I get "ls: \<uuid>: No such file or directory" on both ESX hosts.
@bk1te, can you also confirm that both VMs are on the same datastore. And can you upload /var/log/vmware/vmdk_ops.log from both hosts.
esx1 and esx2 both have access to the shared datastore-0, on which I would like to create the shareable disk. vm1 is on esx1 (datastore-2); vm2 is on esx2 (datastore-1).
@govint I can upload /var/log/vmware/vmdk_ops.log from both hosts, but later.
Here is the requested info:
esx-1-ls-vmfs-volumes.txt
esx-1-vmdk_ops.txt
esx-2-docker-volume-ls.txt
esx-2-vmdk_ops.txt
esx-1-ls-vmfs-volumes-msa-2312i-docker-store-0.txt
esx-2-ls-vmfs-volumes.txt
esx-1-docker-volume-ls.txt
esx-2-ls-vmfs-volumes-msa-2312i-docker-store-0.txt
@bk1te Thanks for sharing logs. This is a regression. We are working on emergency patch. Stay tuned.
@bk1te Please give it a shot with the emergency patch for the reported issue. You may find it at https://github.com/vmware/docker-volume-vsphere/releases/tag/0.11.1
Please feel free to reach out to us if you have any concerns.
Thanks!
Support a create-time option to allow a volume to be attached to more than one VM. Multi-writer VMDKs can be shared across VMs and are needed if we expect to support that workload.
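If such an option were added, the create path would need to validate it and record the safety implications discussed above. A hypothetical sketch of that validation (the option name "sharing" and its values are illustrative only, not the plugin's actual interface):

```python
# Hypothetical create-time option for requesting a multi-writer VMDK.
VALID_SHARING = ("single-writer", "multi-writer")

def validate_create_opts(opts):
    """Validate a create-time sharing option and return the derived
    disk settings. Defaults to the safe single-writer behavior."""
    sharing = opts.get("sharing", "single-writer")
    if sharing not in VALID_SHARING:
        raise ValueError("sharing must be one of: %s" % ", ".join(VALID_SHARING))
    # Multi-writer disks bypass the usual exclusive-attach check, so a
    # regular filesystem (ext4, xfs) on such a disk will be corrupted if
    # mounted on two VMs; only cluster-aware software should request this.
    return {"multi_writer": sharing == "multi-writer"}

print(validate_create_opts({}))
print(validate_create_opts({"sharing": "multi-writer"}))
```

Defaulting to single-writer keeps existing users safe, while the explicit opt-in surfaces the data-corruption risk the thread warns about.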