vdsm / virtual-dsm

Virtual DSM in a Docker container.
MIT License

Split Disk Option? #289

Closed · godismyjudge95 closed this issue 1 year ago

godismyjudge95 commented 1 year ago

First, thanks for the awesome Docker setup; it works like a charm.

I was wondering if there might be a way to split the disk or use multiple disk images. I currently have multiple drives set up in my system and would like some redundancy if I am to use this DSM Docker setup, but I don't see a way to get that unless I can somehow split the root image into multiple parts (spread across drives), or mount multiple images into DSM and have it RAID them together.

I have previously built a multi-image setup with xpenology and RAID inside DSM, but this Docker image would be far simpler to maintain and update if there were some way to do the same through it.

I have read through your other replies about using iSCSI or NFS to mount external drives inside DSM, but I would really like to take advantage of DSM's ability to RAID the disks itself, gaining some redundancy and a speed improvement by placing multiple images across drives.

Architecturally, I could see a system wherein a user mounts multiple volumes to /storage, /storage1, /storage2, etc., with the assumption that each of these storage mounts lives on a different drive. During the image-generation step, the script could detect which folders are present and create an image of the specified size in each one. Finally, it would attach all of these images as separate drives to the DSM VM.

Let me know what you think :)

kroese commented 1 year ago

This container doesn't provide this option, because if you want redundancy it's much easier to achieve at the host level.

For example, you create a RAID array in Linux from two disks and use it as the location for your virtual disk image. That way you get full RAID redundancy for this container (or any other container).

I think this is a much better approach, because you end up with a fully standard RAID (instead of us creating a hacky solution) that works with all the standard tools.
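As a rough sketch (not something to paste blindly: the device names, mount point, and image tag below are placeholders, and creating the array destroys the disks' contents), the host-side mirror could look like:

```shell
# Create a RAID1 mirror from two whole disks.
# /dev/sdb and /dev/sdc are placeholders -- substitute your own devices.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array and mount it
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid1
mount /dev/md0 /mnt/raid1

# Use the mirrored mount as the container's /storage, so the virtual
# disk image itself lives on redundant storage
docker run -d --device=/dev/kvm --cap-add NET_ADMIN -p 5000:5000 \
  -v /mnt/raid1/dsm:/storage kroese/virtual-dsm
```

If a drive fails, the standard mdadm tooling (`mdadm --detail /dev/md0`, hot-swapping the failed member) handles recovery without the container being aware of anything.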

godismyjudge95 commented 1 year ago

> This container doesn't provide this option, because if you want redundancy it's much easier to achieve at the host level.
>
> For example, you create a RAID array in Linux from two disks and use it as the location for your virtual disk image. That way you get full RAID redundancy for this container (or any other container).
>
> I think this is a much better approach, because you end up with a fully standard RAID (instead of us creating a hacky solution) that works with all the standard tools.

I guess that is a fair point. I mainly wanted the option because I am currently using unRAID with its standard setup (i.e. non-ZFS/RAID parity), as I wanted to keep open the possibility of having different-sized drives. Being able to create multiple virtual disk images would allow for this and help me in my procrastination over setting up ZFS ;)

I am not positive, but I believe having multiple images RAIDed inside Synology could help guard against corruption of the images themselves?


Maybe an alternative would be to somehow mount a folder through Docker that is then passed through to the DSM VM? That way all the user's files could be stored outside the OS image, in whatever layout, outside of Docker.

It looks like qemu supports folder mounting? - https://wiki.qemu.org/Documentation/9psetup

-virtfs local,path=/host/path/to/share,mount_tag=host0,security_model=mapped,id=host0

I might experiment with this myself to see if it's a viable option.
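For reference, the full wiring would look something like this (a sketch based on the qemu wiki page above; the paths and mount tag are placeholders, and it assumes the guest kernel ships the 9p/virtio modules):

```shell
# Host side: start qemu with a folder exported to the guest over 9p
qemu-system-x86_64 -m 2G -enable-kvm \
  -drive file=dsm.img,format=raw \
  -virtfs local,path=/host/path/to/share,mount_tag=host0,security_model=mapped,id=host0

# Guest side: mount the 9p share (requires 9p, 9pnet and 9pnet_virtio
# support in the guest kernel)
mkdir -p /mnt/host
mount -t 9p -o trans=virtio,version=9p2000.L host0 /mnt/host
```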


At the end of the day, I am just worried about this single image getting corrupted while it holds all my photos (I use Synology Photos), and having to restore a multi-TB file from backup as a result.

kroese commented 1 year ago

Yes, it is possible to have multiple disk images inside Virtual DSM, but even if I add support for that, your problem will not be solved: Virtual DSM will not see them as "physical" drives, so it will not offer any option to form a RAID between them. You can try it yourself in Virtual Machine Manager on your Synology: attach two disk images and boot up VirtualDSM to see if it will allow you to use them as a RAID.

About the folder mounting: this was discussed in previous issues like https://github.com/kroese/virtual-dsm/issues/12 and https://github.com/kroese/virtual-dsm/issues/219 . The short conclusion is that the kernel of VirtualDSM lacks the modules/drivers needed for folder pass-through.

godismyjudge95 commented 1 year ago

Ah, that stinks about the folder mounting, as that would be the ideal solution. I did get DSM RAID working with multiple virtual disks under unRAID's VM system (which also uses qemu).

If you are wondering why I am looking to switch: the tinycore redpill approach is fragile across updates, and this Docker approach would also be easier to manage.

Here is the relevant portion of my working VM config in case that helps:

  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/cache/domains/xpenology/tinycore-redpill.v0.4.6.img' index='6'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <alias name='sata0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/cache/domains/xpenology/vcache0x0.img' index='5'/>
      <backingStore/>
      <target dev='hdd' bus='sata'/>
      <alias name='sata0-0-3'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/cache/domains/xpenology/vcache0x1.img' index='4'/>
      <backingStore/>
      <target dev='hde' bus='sata'/>
      <alias name='sata0-0-4'/>
      <address type='drive' controller='0' bus='0' target='0' unit='4'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disk1/domains/xpenology/vdisk1x0.img' index='3'/>
      <backingStore/>
      <target dev='hdf' bus='sata'/>
      <alias name='sata1-0-2'/>
      <address type='drive' controller='1' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disk2/domains/xpenology/vdisk2x0.img' index='2'/>
      <backingStore/>
      <target dev='hdg' bus='sata'/>
      <alias name='sata1-0-0'/>
      <address type='drive' controller='1' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disk3/domains/xpenology/vdisk3x0.img' index='1'/>
      <backingStore/>
      <target dev='hdh' bus='sata'/>
      <alias name='sata1-0-1'/>
      <address type='drive' controller='1' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='sata' index='1'>
      <alias name='sata1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:11:32:af:ba:56'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-xpenology/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' websocket='5701' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
kroese commented 1 year ago

The reason you got it working before is that you were running a modified version of regular DSM, while this project runs an unmodified (stock) version of VirtualDSM. And VirtualDSM does not support RAID, or many other features that regular DSM has.

So I'm afraid your only option is to do RAID on the host side, not inside the container.

godismyjudge95 commented 1 year ago

Ah, that makes sense. Oh well. Since this seems to be a common enough request, maybe this explanation could go in the FAQ?

Thanks for your responses :)

kroese commented 1 year ago

I added support for multiple disks in the latest version. To use it, just bind the directories /storage2 or /storage3 in the compose file, and it will mount those extra disks in DSM. You can set their sizes via DISK2_SIZE and DISK3_SIZE.
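In compose terms that looks something like this (a sketch: the image tag, host paths, and sizes are examples; placing each host path on a different physical drive puts the extra disks on separate spindles):

```yaml
services:
  dsm:
    image: kroese/virtual-dsm:latest   # example tag
    environment:
      DISK_SIZE: "256G"    # primary disk, backed by /storage
      DISK2_SIZE: "256G"   # extra disk, backed by /storage2
    devices:
      - /dev/kvm
    cap_add:
      - NET_ADMIN
    ports:
      - 5000:5000
    volumes:
      - /mnt/disk1/dsm:/storage    # example host paths, ideally on
      - /mnt/disk2/dsm:/storage2   # different physical drives
```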

godismyjudge95 commented 10 months ago

> I added support for multiple disks in the latest version. To use it, just bind the directories /storage2 or /storage3 in the compose file, and it will mount those extra disks in DSM. You can set their sizes via DISK2_SIZE and DISK3_SIZE.

Just now seeing this notification 😂, this is awesome thanks for the new feature!