vdsm / virtual-dsm

Virtual DSM in a Docker container.
MIT License
2.58k stars 343 forks

Is there any chance to directly change Docker VM to KVM? #674

Closed 0Knot closed 3 days ago

0Knot commented 6 months ago

This is an outstanding project, and I'm hugely appreciative of the developers' contributions. I noticed that the environment employs Docker to invoke KVM. I'm wondering whether anyone has tried to bypass Docker and run the application directly in KVM, for instance by using virtual-dsm directly in a Proxmox VE or ESXi VPS.

Why do I propose this? Upgrading the system using arpl is a ticking time bomb. Moreover, arpl isn't very accommodating to VM installations; it is perhaps more apt for installations on physical machines.

kroese commented 6 months ago

This container runs fine on Proxmox and ESXi. It is theoretically possible to run the scripts outside of a container, but it would become very complicated to install, because it is not just a VM: there are multiple additional scripts involved (for example, for gracefully shutting down DSM). So the container just acts as a way to bundle all the dependencies and provides an easy way to start/stop the VM with all those additional scripts executed at the right times.
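To illustrate what that wiring looks like (this is not the project's actual code; vDSM's shutdown actually goes through the serial-port bridge discussed below, and the QEMU flags here are placeholders), a container entrypoint typically turns `docker stop` (SIGTERM) into a graceful guest power-down:

    #!/usr/bin/env bash
    # Illustrative sketch only: start QEMU with a monitor socket and
    # forward SIGTERM as a guest power-down request instead of killing
    # QEMU outright.
    qemu-system-x86_64 -monitor unix:/run/qemu.sock,server,nowait "$@" &
    qemu_pid=$!

    graceful_stop() {
      # ask the guest OS to power down, then wait for QEMU to exit
      echo 'system_powerdown' | socat - UNIX-CONNECT:/run/qemu.sock
      wait "$qemu_pid"
    }
    trap graceful_stop SIGTERM SIGINT

    wait "$qemu_pid"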

mndti commented 6 months ago

Excellent job. Congratulations.

I would also like to run it directly in a VM. I studied the command and managed to make it work in a VM after Docker generated the .img files and virtual disks.

  1. Generate the vDSM via Docker with the basic 6 GB disk.
  2. Wait for it to install.
  3. Copy the files to Proxmox (e.g. into the ISO directory): DSM_VirtualDSM_69057.boot.img, DSM_VirtualDSM_69057.system.img, and data.img (the 6 GB virtual disk).

Create the VM and, inside args, put the following (it must stay on a single line in the config):

    -nodefaults -boot strict=on -cpu host,kvm=on,l3-cache=on,migratable=no -smp 4 -m 6G -machine type=q35,usb=off,vmport=off,dump-guest-core=off,hpet=off,accel=kvm -enable-kvm -global kvm-pit.lost_tick_policy=discard -object iothread,id=io2 -device virtio-scsi-pci,id=hw-synoboot,iothread=io2,bus=pcie.0,addr=0xa -drive file=/var/lib/vz/template/iso/DSM_VirtualDSM_69057.boot.img,if=none,id=drive-synoboot,format=raw,cache=none,aio=native,discard=on,detect-zeroes=on -device scsi-hd,bus=hw-synoboot.0,channel=0,scsi-id=0,lun=0,drive=drive-synoboot,id=synoboot0,rotation_rate=1,bootindex=1 -device virtio-scsi-pci,id=hw-synosys,iothread=io2,bus=pcie.0,addr=0xb -drive file=/var/lib/vz/template/iso/DSM_VirtualDSM_69057.system.img,if=none,id=drive-synosys,format=raw,cache=none,aio=native,discard=on,detect-zeroes=on -device scsi-hd,bus=hw-synosys.0,channel=0,scsi-id=0,lun=0,drive=drive-synosys,id=synosys0,rotation_rate=1,bootindex=2 -drive file=/var/lib/vz/template/iso/data.img,if=none,id=drive-userdata,format=raw,cache=none,aio=native,discard=on,detect-zeroes=on -device virtio-scsi-pci,id=hw-userdata,iothread=io2,bus=pcie.0,addr=0xc -device scsi-hd,bus=hw-userdata.0,channel=0,scsi-id=0,lun=0,drive=drive-userdata,id=userdata,rotation_rate=1,bootindex=3

After that, the VM will boot just as it did when generated in Docker, and extra disks can be attached, which DSM will read automatically.

Note: although it boots and is even usable, I noticed that the widgets take a long time to load, and Information Center > General does not load. I believe it has something to do with host.bin?

Any ideas?

Sorry for the bad English

mndti commented 6 months ago

Update (tested on Proxmox):

I managed to make it work in the VM; maybe this will help someone. It was already working, but without host.bin it did not load system information, and it took a while for the widgets and system information to appear.

  1. Create a folder: mkdir /mnt/hdd

  2. Run the container. Yes, with 2 GB it will give an error, but it will generate the images we need.

    docker run -it --rm --name dsm \
    -p 5000:5000 --device=/dev/kvm \
    -v /mnt/hdd:/storage \
    -e DISK_SIZE="2G" \
    --cap-add NET_ADMIN \
    --stop-timeout 120 \
    vdsm/virtual-dsm
  3. Go to /mnt/hdd and check that you have the two files: DSM_VirtualDSM_69057.boot.img and DSM_VirtualDSM_69057.system.img.

Note: if you want host.bin, search for it with `find / -name '*host.bin'` and copy it to the /mnt/hdd folder as well, so it can be transferred to Proxmox.
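One way to grab it, assuming the dsm container is still running (it was started with --rm, so it disappears once stopped); the in-container path below is a guess, so replace it with whatever find reports:

    # locate host.bin inside the running container, then copy it out
    docker exec dsm find / -name '*host.bin' 2>/dev/null
    docker cp dsm:/run/host.bin /mnt/hdd/host.bin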

  4. Install Samba/SFTP and copy these files to Proxmox; choose whichever method you prefer, there is plenty of information on the internet.

  5. In the Proxmox shell, create a folder and place the files in it:

    mkdir /mnt/vdsm
    cp /path/DSM_VirtualDSM_69057.boot.img /mnt/vdsm/boot.img
    cp /path/DSM_VirtualDSM_69057.system.img /mnt/vdsm/system.img
    cp /path/host.bin /mnt/vdsm/host.bin
  6. Run host.bin: /mnt/vdsm/host.bin -cpu=4 -cpu_arch="processor model" > /dev/null 2>&1 &. This process will not survive a reboot, so create a cron job or service to run it at system start (see the service-unit sketch after these steps).

  7. Create a virtual machine: SeaBIOS / q35 / VirtIO SCSI or VirtIO SCSI single / Display: none / Network: VirtIO (paravirtualized) / add Serial Port 2 / add VirtIO RNG. Note: do not attach any disks for now.

  8. From the shell, edit nano /etc/pve/qemu-server/VMID.conf and add the following (all on a single args: line):

    args: -serial pty -device virtio-serial-pci,id=virtio-serial0,bus=pcie.0,addr=0x3 -chardev socket,id=charchannel0,host=127.0.0.1,port=12345,reconnect=10 -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=vchannel -object iothread,id=io2 -device virtio-scsi-pci,id=hw-synoboot,iothread=io2,bus=pcie.0,addr=0xa -drive file=/mnt/vdsm/boot.img,if=none,id=drive-synoboot,format=raw,cache=none,aio=native,discard=on,detect-zeroes=on -device scsi-hd,bus=hw-synoboot.0,channel=0,scsi-id=0,lun=0,drive=drive-synoboot,id=synoboot0,rotation_rate=1,bootindex=1 -device virtio-scsi-pci,id=hw-synosys,iothread=io2,bus=pcie.0,addr=0xb -drive file=/mnt/vdsm/system.img,if=none,id=drive-synosys,format=raw,cache=none,aio=native,discard=on,detect-zeroes=on -device scsi-hd,bus=hw-synosys.0,channel=0,scsi-id=0,lun=0,drive=drive-synosys,id=synosys0,rotation_rate=1,bootindex=2

Save the file and start the VM.

  9. Access the IP assigned via DHCP by your router and check that everything is OK.
  10. Add the desired disks in the VM UI.
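For the "create a service" part of step 6, a minimal systemd unit could look like this (a sketch; the unit name is made up, and the host.bin flags should be adjusted to match your own invocation):

    # /etc/systemd/system/vdsm-host.service (hypothetical name)
    [Unit]
    Description=qemu-host bridge for Virtual DSM
    After=network.target

    [Service]
    ExecStart=/mnt/vdsm/host.bin -cpu=4 -cpu_arch="processor model" -addr=0.0.0.0:12345 -api=:2210
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Enable it with systemctl enable --now vdsm-host.service.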

I'm using it here and it works fine; test it at your own risk.

kroese commented 6 months ago

@thiagofperes Impressive work! But I think you are still missing one part: graceful shutdown. The code in power.sh sends a shutdown signal to vDSM, which is absent from your solution. So when you shut down the VM, it will not exit cleanly.

Deroy2112 commented 6 months ago

@thiagofperes thanks

You can also import the img into Proxmox.

boot.img goes in SCSI slot 9, system.img in SCSI slot 10:

#Virtual DSM VM ID
VMID=100 
#Virtual DSM Storage Name
VM_STORAGE=local-zfs

qm importdisk $VMID /mnt/vdsm/DSM_VirtualDSM_69057.boot.img $VM_STORAGE
qm importdisk $VMID /mnt/vdsm/DSM_VirtualDSM_69057.system.img $VM_STORAGE

qm set $VMID --scsi9 $VM_STORAGE:vm-$VMID-disk-0,discard=on,cache=none
qm set $VMID --scsi10 $VM_STORAGE:vm-$VMID-disk-1,discard=on,cache=none

qm set $VMID --boot order=scsi9

Additional hard drives continue from scsi11, scsi12, scsi13, etc.
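Extra disks don't have to be imported from images; a new one can be allocated straight from storage, for example (the 32 GB size is arbitrary):

    # allocate a fresh 32 GB disk on the next free slot; vDSM picks it up automatically
    qm set $VMID --scsi11 $VM_STORAGE:32,discard=on,cache=none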

Proxmox VM args:

ARGS=$(echo -device virtio-serial-pci,id=virtio-serial0,bus=pcie.0,addr=0x3 -chardev socket,id=charchannel0,host=127.0.0.1,port=12345,reconnect=10 -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=vchannel)
qm set $VMID --args "$ARGS"

aio can be set to native or io_uring via the GUI.

Graceful shutdown works via a hookscript.

Just an example:

Note: "Snippets" must be enabled as a content type on the storage that holds the hookscript (Proxmox GUI -> Datacenter -> Storage).

Hookscript path: /var/lib/vz/snippets/vdsm.sh

#!/bin/bash
set -o errexit -o pipefail -o nounset

vmId="$1"
runPhase="$2"

case "$runPhase" in
    pre-start)
        # start the host.bin bridge before the VM boots
        /mnt/vdsm/host.bin -cpu=2 -cpu_arch=processor -mac=00:00:00:00:00 -hostsn=HostSN -guestsn=GuestSN -addr=0.0.0.0:12345 -api=:2210 &>/dev/null &
    ;;

    post-start)
      ;;

    pre-stop)
        # ask host.bin to send a graceful shutdown (command 6) to the guest
        url="http://127.0.0.1:2210/read?command=6&timeout=50"
        curl -sk -m "$(( 50+2 ))" -S "$url"
      ;;
    post-stop)
        # stop the bridge once the VM is down
        hID=$(pgrep host.bin)
        kill "$hID"
      ;;
    *)
      echo "Unknown run phase \"$runPhase\"!"
      ;;
esac
echo "Finished $runPhase on VM=$vmId"

Assign the hookscript to the VM (format: STORAGE:snippets/vdsm.sh):

qm set $VMID --hookscript "local:snippets/vdsm.sh"

Then host.bin is executed at startup and stopped as soon as the VM is shut down or stopped.

mndti commented 6 months ago

@kroese

Thank you very much for your project. Your Docker project works perfectly; I tried using it the same way, but I faced some problems.

Machine: N5100, 12 GB RAM, 256 GB NVMe, 1 TB HDD, i225 4x LAN

In addition to those problems, exposing services for automatic discovery (DLNA, for example) is quite complicated.

In the tests I did with the VM, I reached an uptime of 23h, and I only stopped it to make the changes that @Deroy2112 posted.

I will continue testing.

The VM really wasn't 100%, but for my use it wouldn't be a problem to shut it down from within the vDSM UI. I tested @Deroy2112's solution and it was excellent.

@Deroy2112 thank you very much, it was excellent. I racked my brains trying to boot and manage it from the Proxmox UI, but the problem was that boot.img and system.img had to be on scsi9/scsi10. Furthermore, the following disks must come after those (scsi11, scsi12, scsi13).

Your vdsm.sh script is great too. I'm new to Proxmox; I made your changes and it's working great. I just don't understand why, even when specifying the processor in host.bin, it is not shown in the DSM panel. But that's just a detail.

For anyone testing, it is important to use VirtIO SCSI single.

With this solution I can now use multiple networks, and discovery of Samba, DLNA and other services works well.

mndti commented 6 months ago

Do you know how I can pass through the GPU of the Intel N5100? I tried:

    -display egl-headless,rendernode=/dev/dri/renderD128 -vga virtio

But unfortunately it doesn't show up in vDSM.

r0bb10 commented 6 months ago

It works almost perfectly, but not 100%: the args have to be passed in a better way, and as far as I tested, upgrades won't work; for me at least it says corrupted.

mndti commented 6 months ago

@r0bb10

Check the previous posts where @Deroy2112 posted a better solution. This thread now has all the information you need.

What exactly is the problem with the update? It is working normally here; I updated and had no problems.

UPDATE

Regarding the processor, here is how it must be specified in host.bin for it to appear in the panel:

    /mnt/vdsm/host.bin -cpu=4 -cpu_arch="Intel Celeron N5100,," -mac=00:00:00:00:00 -hostsn=HostSN -guestsn=GuestSN -addr=0.0.0.0:12345 -api=:2210 &>/dev/null &


r0bb10 commented 6 months ago

> @r0bb10
>
> Check the previous posts where @Deroy2112 posted a better solution. This thread now has all the information you need.
>
> What exactly is the problem with the update? It is working normally here; I updated and had no problems.

I finally managed to make it work; the boot.img file had problems, so I recreated it, and now it runs fine.

Technically, system.img does not need to be imported on scsi10: a new disk of at least 12 GB can be attached in Proxmox, and DSM will install itself on it fresh.

Every other disk added (scsi11, scsi12) can be hot-added while the VM is running, and DSM will format it in Btrfs without asking permission. I did not test passing an entire physical disk directly, but it should work. Virtual DSM has no RAID support, so every disk attached is automatically added as a single-disk volumeX. The best approach is therefore to aggregate disks in Proxmox (mdadm?) and pass one device without any filesystem to the VM, handling the rest in vDSM as a single Btrfs volume with snapshots (a rough sketch follows below).
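A rough sketch of that aggregation idea, assuming two spare disks /dev/sdb and /dev/sdc and VM ID 100 (all of these names are hypothetical):

    # mirror the two disks on the Proxmox host
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    # pass the bare md device to the VM; vDSM will format it as Btrfs
    qm set 100 --scsi11 /dev/md0,discard=on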

r0bb10 commented 4 months ago

A little update: I can confirm it works really, really well in a native VM.

To make it work perfectly, the snippet has to be adjusted, since some parts are missing. Here is the complete line:

    /var/lib/vz/vdisks/host.bin -cpu=4 -cpu_arch="Intel(R) Core(TM) i3-N305,," -mac=XX:XX:XX:XX:XX:XX -model=DS718+ -hostsn=XXXXXXXXXXXXX -guestsn=XXXXXXXXXXXX -addr=0.0.0.0:12345 -api=:2210 &>/dev/null &


BobWs commented 4 months ago

Has anyone tried this VM solution on a real Synology using VMM (Virtual Machine Manager)? Synology offers one free vDSM license for use on a NAS, but with this experiment one could install multiple vDSMs on a single Synology NAS.

BobWs commented 4 months ago

Where can I find the host.bin file (the one found with find / -name '*host.bin')?

kroese commented 4 months ago

@BobWs https://github.com/qemus/qemu-host/releases/download/v2.05/qemu-host.bin
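For example, to fetch it straight onto the Proxmox host (using the /mnt/vdsm path from the steps above):

    # download the prebuilt binary and make it executable
    wget -O /mnt/vdsm/host.bin https://github.com/qemus/qemu-host/releases/download/v2.05/qemu-host.bin
    chmod +x /mnt/vdsm/host.bin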

BobWs commented 4 months ago

I have managed to install it on Synology's VMM without the host.bin file, and it is working. I haven't found any problems yet, except that the widgets and system info take a while to load. Also, qemu isn't working, and I can't shut down the VM via VMM, but within DSM I can shut down and restart just fine. I also updated DSM via the DSM update menu, and it worked, as did installing DSM packages.

BobWs commented 4 months ago

> @BobWs https://github.com/qemus/qemu-host/releases/download/v2.05/qemu-host.bin

Thanks! Do you know how to run this in Synology's VMM? Where do I put this bin file?

r0bb10 commented 4 months ago

> I have managed to install it on Synology's VMM without the host.bin file, and it is working. I haven't found any problems yet, except that the widgets and system info take a while to load. Also, qemu isn't working, and I can't shut down the VM via VMM, but within DSM I can shut down and restart just fine. I also updated DSM via the DSM update menu, and it worked, as did installing DSM packages.

Yes, it works, but there is no safe shutdown, and you cannot pass a serial and model, so everything tied to Synology (QuickConnect, AME codecs, DDNS) won't work; that's why the system info does not load as it should.

host.bin has to run on the original Synology, not inside VMM; it acts as a sort of bridge. This is why, on Proxmox, it runs on the host (as a snippet or a permanent service) and not in the VM.

BobWs commented 4 months ago

> host.bin has to run on the original Synology, not inside VMM; it acts as a sort of bridge. This is why, on Proxmox, it runs on the host (as a snippet or a permanent service) and not in the VM.

Okay, thanks for explaining! I will take a look at how to run host.bin on the host (the original Synology)...

kroese commented 4 months ago

@BobWs Why are you not just running the container in Container Manager instead of using Virtual Machine Manager? Just wondering...

mndti commented 4 months ago

> Has anyone tried this VM solution on a real Synology using VMM (Virtual Machine Manager)? Synology offers one free vDSM license for use on a NAS, but with this experiment one could install multiple vDSMs on a single Synology NAS.

I didn't quite understand your objective.

DiskStation Manager already supports multiple installations of vDSM; however, as mentioned, only one license is included with VMM. If you want to run more instances, you need to purchase separate licenses.

I don't know what the purpose is; if it's for testing, OK. But I've recently been doing a lot of testing of DiskStation Manager/vDSM in virtual machines and on bare metal, and unfortunately you will always run into some limitation.

VMM does not give you the option of advanced or extra arguments/configurations. To do that, you would have to run QEMU manually through the terminal/SSH.

I recommend/suggest you try Container Manager/Docker.

https://www.synology.com/en-uk/dsm/feature/docker

BobWs commented 4 months ago

> @BobWs Why are you not just running the container in Container Manager instead of using Virtual Machine Manager? Just wondering...

Because of the network limitations of Docker. I'm using vDSM as a VPN gateway for my LAN, and that isn't possible with the Docker vDSM... I tried it, but it didn't work. With the VM solution I can do that just fine.

kroese commented 4 months ago

@BobWs Okay, now it makes sense. Normally you can also do that with Docker, just not on a Synology, because its kernel does not support the macvtap interface needed for DHCP=Y. So under these circumstances I understand your decision.
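For reference, on hosts whose kernel does support macvtap, DHCP mode is enabled roughly like this (a sketch based on the project's documented DHCP option; the network name, subnet and parent interface are examples, and exact flags may differ by version):

    # create a macvlan network bound to the host NIC (example values)
    docker network create -d macvlan \
      --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
      -o parent=eth0 vdsm-net

    # run the container on that network with DHCP enabled
    docker run -it --rm --name dsm \
      --device=/dev/kvm --device=/dev/vhost-net \
      --device-cgroup-rule='c *:* rwm' \
      --cap-add NET_ADMIN \
      --network vdsm-net \
      -e DHCP="Y" \
      vdsm/virtual-dsm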

ShuTing-Chaing commented 1 month ago

Hello, how can I run two vDSM machines on the same host? I copied everything, but I can't get past the Synology account login. I think something is wrong with the args. Can anyone take a look? Thanks.

vdsm1:

    /var/lib/pve/local-btrfs/snippets/host.bin -cpu=4 -cpu_arch="Intel(R) CC150,," -mac= -model=DS918+ -hostsn= -guestsn= -addr=0.0.0.0:12345 -api=:2210 &>/dev/null &
    ARGS=$(echo -device virtio-serial-pci,id=virtio-serial0,bus=pcie.0,addr=0x3 -chardev socket,id=charchannel0,host=127.0.0.1,port=12345,reconnect=10 -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=vchannel)

vdsm2:

    /var/lib/pve/local-btrfs/snippets/host2.bin -cpu=4 -cpu_arch="Intel(R) CC150,," -mac= -model=DS918+ -hostsn= -guestsn= -addr=0.0.0.0:12346 -api=:2211 &>/dev/null &
    ARGS=$(echo -device virtio-serial-pci,id=virtio-serial0,bus=pcie.0,addr=0x3 -chardev socket,id=charchannel0,host=127.0.0.1,port=12346,reconnect=10 -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=vchannel)