007revad / Synology_M2_volume

Easily create an M.2 volume on Synology NAS
MIT License
810 stars 56 forks

RAID5 without official M.2 NVMe adapter card #78

Closed - hawie closed this 1 year ago

hawie commented 1 year ago
          > Unfortunately if DSM isn't seeing the PCIe card with synonvme the drive won't show up in storage manager.

That card is very cheap. I might buy one myself to see if I can get it working in my DS1821+

Is it possible to enable it, i.e. create RAID5 without the official M.2 NVMe adapter card?

_Originally posted by @hawie in https://github.com/007revad/Synology_M2_volume/issues/76#issuecomment-1657420785_

hawie commented 1 year ago
  1. Basic information

cmd:

udevadm info /dev/nvme0n1 | head -n 1
udevadm info /dev/nvme1n1 | head -n 1
udevadm info /dev/nvme2n1 | head -n 1
udevadm info /dev/nvme3n1 | head -n 1
cat /etc.defaults/extensionPorts
cat /etc/extensionPorts
synonvme --m2-card-model-get /dev/nvme0n1
synonvme --m2-card-model-get /dev/nvme1n1
synonvme --m2-card-model-get /dev/nvme2n1
synonvme --m2-card-model-get /dev/nvme3n1
cat /run/synostorage/disks/nvme0n1/m2_pool_support 
cat /run/synostorage/disks/nvme1n1/m2_pool_support 
cat /run/synostorage/disks/nvme2n1/m2_pool_support 
cat /run/synostorage/disks/nvme3n1/m2_pool_support

screen:

ash-4.4# udevadm info /dev/nvme0n1 | head -n 1
P: /devices/pci0000:00/0000:00:1c.0/0000:01:00.0/nvme/nvme0/nvme0n1
ash-4.4# udevadm info /dev/nvme1n1 | head -n 1
P: /devices/pci0000:00/0000:00:1c.1/0000:02:00.0/nvme/nvme1/nvme1n1
ash-4.4# udevadm info /dev/nvme2n1 | head -n 1
P: /devices/pci0000:00/0000:00:1c.2/0000:03:00.0/nvme/nvme2/nvme2n1
ash-4.4# udevadm info /dev/nvme3n1 | head -n 1
P: /devices/pci0000:00/0000:00:1c.3/0000:04:00.0/nvme/nvme3/nvme3n1
ash-4.4# cat /etc.defaults/extensionPorts
[pci]
pci1="0000:00:1c.0"
pci2="0000:00:1c.1"
pci3="0000:00:1c.2"
pci4="0000:00:1c.3"
ash-4.4# cat /etc/extensionPorts
[pci]
pci1="0000:00:1c.0"
pci2="0000:00:1c.1"
pci3="0000:00:1c.2"
pci4="0000:00:1c.3"
ash-4.4# synonvme --m2-card-model-get /dev/nvme0n1
Not M.2 adapter card
ash-4.4# synonvme --m2-card-model-get /dev/nvme1n1
Not M.2 adapter card
ash-4.4# synonvme --m2-card-model-get /dev/nvme2n1
Not M.2 adapter card
ash-4.4# synonvme --m2-card-model-get /dev/nvme3n1
Not M.2 adapter card
ash-4.4# cat /run/synostorage/disks/nvme0n1/m2_pool_support 
0
ash-4.4# cat /run/synostorage/disks/nvme1n1/m2_pool_support
0
ash-4.4# cat /run/synostorage/disks/nvme2n1/m2_pool_support
0
ash-4.4# cat /run/synostorage/disks/nvme3n1/m2_pool_support
0
ash-4.4#
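For reference, the gating check above can be reproduced offline. A minimal sketch (using the `extensionPorts` contents from this thread as sample data, with a `/tmp` path standing in for the real file) that extracts a drive's PCIe bridge address from its udevadm device path and checks whether that bridge is listed:

```shell
# Sample extensionPorts contents taken from this thread (written to /tmp for illustration)
cat > /tmp/extensionPorts <<'EOF'
[pci]
pci1="0000:00:1c.0"
pci2="0000:00:1c.1"
pci3="0000:00:1c.2"
pci4="0000:00:1c.3"
EOF

# The udevadm "P:" path for nvme0n1; the 4th path component is the PCIe bridge
devpath="/devices/pci0000:00/0000:00:1c.0/0000:01:00.0/nvme/nvme0/nvme0n1"
bridge=$(echo "$devpath" | cut -d/ -f4)   # -> 0000:00:1c.0

# The slot only counts as a known port if its bridge address is listed
if grep -q "\"$bridge\"" /tmp/extensionPorts; then
    echo "supported"    # prints "supported" for this sample data
else
    echo "unsupported"
fi
```

This matches the output above: all four bridges are listed, yet `m2_pool_support` is still `0`, so DSM's pool-support flag must be gated on more than the port mapping alone.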
hawie commented 1 year ago
  1. Tried RDM-mapping the 4 NVMe disks to 4 SATA disks in PVE, then used Synology Storage Manager to create the storage pool and volume. Extracted the RAID5 information, generated a RAID5 configuration file, and changed the sda entries in it to the corresponding NVMe devices. Then rebooted, switched to NVMe passthrough, and ran:
    /sbin/mdadm --assemble /dev/md2 --scan  --no-degraded --config=/root/mdadm.conf
    /sbin/vgchange -ay /dev/vg1
    mount /dev/mapper/vg1-volume_1 /volume1/

    RAID5 configuration file /root/mdadm.conf

    ARRAY /dev/md2 level=raid5 num-devices=4 metadata=1.2 name=N100:2 UUID=82adf9da:a918ef39:96ca8d1e:b60467e4
       devices=/dev/nvme0n1p5,/dev/nvme1n1p5,/dev/nvme2n1p5,/dev/nvme3n1p5

    ash-4.4# mount | grep volume
    /dev/mapper/vg1-volume_1 on /volume1 type btrfs (rw,relatime,space_cache=v2,metadata_ratio=50,block_group_cache_tree,syno_allocator,subvolid=256,subvol=/@syno)
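The config file used above can also be generated rather than hand-edited. A minimal sketch that writes a `/tmp` copy of that mdadm.conf (the UUID and member list are the ones from this comment; on a live system `mdadm --examine --scan` can emit the ARRAY line for you):

```shell
# Minimal mdadm.conf for assembling the RAID5 set by UUID.
# On a system where the members are visible you could instead run:
#   mdadm --examine --scan >> /root/mdadm.conf
uuid="82adf9da:a918ef39:96ca8d1e:b60467e4"
members="/dev/nvme0n1p5,/dev/nvme1n1p5,/dev/nvme2n1p5,/dev/nvme3n1p5"

cat > /tmp/mdadm.conf <<EOF
ARRAY /dev/md2 level=raid5 num-devices=4 metadata=1.2 name=N100:2 UUID=$uuid
   devices=$members
EOF
cat /tmp/mdadm.conf
```

In mdadm.conf a line beginning with whitespace continues the previous line, so the `devices=` list belongs to the same ARRAY entry.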

hawie commented 1 year ago
  1. info
    
    ash-4.4# pvdisplay
    --- Physical volume ---
    PV Name               /dev/md2
    VG Name               vg1
    PV Size               5.56 TiB / not usable 960.00 KiB
    Allocatable           yes 
    PE Size               4.00 MiB
    Total PE              1457280
    Free PE               125
    Allocated PE          1457155
    PV UUID               204rri-YEqb-r2Q8-lp9r-k1i6-zyuL-QLGzjp

    ash-4.4# vgdisplay
    --- Volume group ---
    VG Name               vg1
    System ID
    Format                lvm2
    Metadata Areas        1
    Metadata Sequence No  3
    VG Access             read/write
    VG Status             resizable
    MAX LV                0
    Cur LV                2
    Open LV               1
    Max PV                0
    Cur PV                1
    Act PV                1
    VG Size               5.56 TiB
    PE Size               4.00 MiB
    Total PE              1457280
    Alloc PE / Size       1457155 / 5.56 TiB
    Free  PE / Size       125 / 500.00 MiB
    VG UUID               xyIfCl-qMV4-Hb2V-IpUM-8xe0-IMYg-tye9uL
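As a sanity check, the pvdisplay/vgdisplay figures are internally consistent: every size is just the PE count times the 4 MiB extent size. Quick shell arithmetic:

```shell
# Figures reported by vgdisplay above
pe_size_mib=4
total_pe=1457280
alloc_pe=1457155
free_pe=125

# VG size: 1457280 PE x 4 MiB = 5829120 MiB, i.e. ~5.56 TiB
vg_mib=$((total_pe * pe_size_mib))
echo "VG size: $vg_mib MiB"

# Free space: 125 PE x 4 MiB = 500 MiB, matching "Free PE / Size"
echo "Free: $((free_pe * pe_size_mib)) MiB"

# Allocated + free extents must equal the total
echo "Check: $((alloc_pe + free_pe)) PE"
```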



volume1 can be accessed normally through SSH.
hawie commented 1 year ago
  1. Problem: How can I get the system to recognize this as a valid storage pool?

volume1 can be accessed normally through SSH, but no valid storage pool appears. The Online Assemble function in the default Storage Manager GUI cannot succeed because it does not detect all four NVMe disks.

hawie commented 1 year ago

Hoping @007revad can solve this problem without the official adapter. Only then can any number of NVMe disks be added.

007revad commented 1 year ago

I'm surprised that you got as far as you did.

It is possible to create a RAID 5 storage pool using 2 internal NVMe drives and 2 NVMe drives in a Synology M.2 PCIe card.

I do have some questions about your setup:

  1. What NVMe PCIe card do you have?
  2. Do you not have a volume 1 on HDDs?
  3. Are M.2 Drive 1-1 and M.2 Drive 2-1 in the PCIe card?
  4. What happens if you create the volume on vg1 via SSH?
hawie commented 1 year ago
(screenshot: raid5-ok)

It worked. I tried repeatedly with your script, and after one particular reboot Volume 1 appeared, without me even clicking Online Assemble. I don't know how to reproduce it. There is no NVMe PCIe card; as described above, it was simulated in a virtual machine.

007revad commented 1 year ago

You're the 3rd person who has had to run the script multiple times before it worked. But the other 2 had to run the script multiple times after a DSM update to get their NVMe volume back. I have no idea why... but I'd love to figure it out.

007revad commented 1 year ago

How did you simulate 4 NVMe drives in a virtual machine?

Was it a DSM virtual machine or XPEnology in a virtual machine?

hawie commented 1 year ago

Proxmox Virtual Environment, with PCIe passthrough and XPEnology.

jdpdata commented 11 months ago

Sorry to bring up a closed thread, but I just wanted to let you know that I'm able to create a RAID 0 volume with 4x 2TB NVMe drives on an ASUS Hyper M.2 card, with x4x4x4x4 bifurcation on a Lenovo P520 workstation. Will continue testing to see if it's stable.


jdpdata commented 11 months ago

now with Healthy Volume 2 :)


007revad commented 11 months ago

@jdpdata

I'm about to upload an updated version of the script that supports up to 32 NVMe drives :o) It also supports RAID 6 and RAID 10.

jdpdata commented 11 months ago

@007revad Sweet! I want to try Raid10. I'll test it out for you.

jdpdata commented 11 months ago

So, I'm wanting to mount an iSCSI share of this super fast RAID 0 volume, but I'm only getting 1200 MB/s R/W on my Windows machine. Both machines are on 10GbE. Any ideas how to get faster R/W speed?

007revad commented 11 months ago

> @007revad Sweet! I want to try Raid10. I'll test it out for you.

https://github.com/007revad/Synology_M2_volume/blob/develop/syno_create_m2_volume.sh

Can you reply back with shell output? I'd like to check that it's not outputting anything strange.

007revad commented 11 months ago

> So, I'm wanting to mount an iSCSI share of this super fast RAID 0 volume, but I'm only getting 1200 MB/s R/W on my Windows machine. Both machines are on 10GbE. Any ideas how to get faster R/W speed?

1200 MB/s is impressive. 1250 MB/s is the theoretical maximum for 10GbE.

iSCSI Multipathing with 2 physical 10GbE ports on both machines, or a single 25GbE port on each machine should get you double the speed.
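Converting link rates to payload bandwidth shows why ~1200 MB/s is already the ceiling, and what multipathing or 25GbE would buy:

```shell
# 10 GbE raw line rate: 10^10 bits/s / 8 bits per byte / 10^6 bytes per MB
echo "10GbE:   $((10000000000 / 8 / 1000000)) MB/s"   # 1250 MB/s theoretical max

# Two multipathed 10GbE links roughly double that
echo "2x10GbE: $((2 * 1250)) MB/s"                    # 2500 MB/s

# A single 25GbE port per machine
echo "25GbE:   $((25000000000 / 8 / 1000000)) MB/s"   # 3125 MB/s
```

Real-world throughput lands a bit below these figures once Ethernet, TCP, and iSCSI overheads are subtracted, which is why 1200 MB/s on 10GbE is essentially line rate.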

jdpdata commented 11 months ago

Oh man, I'm out of available ports on my 10G switch. May need to dismantle one of my NAS units to steal the dual fiber modules to test. Testing your new script now with Raid10...

jdpdata commented 11 months ago


jdpdata commented 11 months ago

Not working with Raid10. Can't select any of my NVMe drives. Want me to try Raid6?

jdpdata commented 11 months ago

same issue with Raid6


jdpdata commented 11 months ago

Do I need to erase my drives first?

jdpdata commented 11 months ago

Erased my drives. I still can't select them.


007revad commented 11 months ago

I've made a change to the script. Can you try it again?

https://github.com/007revad/Synology_M2_volume/blob/develop/syno_create_m2_volume.sh

And reply with a screenshot.

jdpdata commented 11 months ago

I swapped fiber modules with another NAS. Looks like I have to rebuild the ARC loader to accept the new NIC. Give me a few moments...

jdpdata commented 11 months ago


jdpdata commented 11 months ago


jdpdata commented 11 months ago

creating array is going to take a very long time

007revad commented 11 months ago

> creating array is going to take a very long time

I'm running it now to create RAID 1 with two 500GB NVMe drives and it looks like the resync will take about 35 minutes. I imagine with four 4TB drives it could take 9 hours!

I'm going to add a timer that shows how long the resync took. And get rid of the "Done" option when there are no drives left to select.

jdpdata commented 11 months ago

It's 16% done so far. I'll let it finish. Will report back in the morning.

007revad commented 11 months ago

If it's up to 16% it will only take 2 hours.
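The ETA here is just linear extrapolation from the resync percentage mdadm reports in `/proc/mdstat`. A minimal sketch (the 18-minute elapsed figure is hypothetical, chosen only to illustrate the arithmetic):

```shell
# Estimate total resync time from percent complete and elapsed minutes,
# assuming a roughly constant resync rate.
estimate_total() {
    local pct=$1 elapsed_min=$2
    echo $((elapsed_min * 100 / pct))
}

# e.g. 16% done after a hypothetical 18 minutes -> ~112 minutes total
estimate_total 16 18
```

The linearity assumption is rough, since resync speed often drops on the inner tracks of spinning disks, but for NVMe it tends to hold: the array below finished in 110 minutes.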

jdpdata commented 11 months ago

Ok, 31.9% now. Probably a good time to take a break. I've been at this all day since 10AM! Almost 12 hrs already. I'll run some benchmarks tomorrow.

jdpdata commented 11 months ago

Thank you btw for the awesome scripts! I wanted to stay with XPEnology. Was very tempted to go to the dark side with TrueNAS SCALE. It supports NVMe RAID out-of-the-box, no problem, but I know nothing about managing TrueNAS.

jdpdata commented 11 months ago

Yay! It's done.

007revad commented 11 months ago

Nice. Only 110 minutes. Thanks for testing the script.

jdpdata commented 11 months ago

raid10 is up


jdpdata commented 11 months ago

Some benchmarks: CrystalDiskMark on an iSCSI-mounted disk. Was expecting an R/W hit with RAID 10, but none at all. Still maxing out my 10GbE. I think this is a keeper!!


jdpdata commented 11 months ago

Fully saturated 10GbE on SMB transfers as well.
