Closed herbingk closed 6 months ago
Looks like I'll have to set up an NVMe read cache for 1 of my volumes to see what the script is doing wrong.
Just did a quick check and can confirm: by disabling the M.2 SSD cache, the warning disappears.
Ok. I've done some testing and I get the warning if the scriptpath variable is empty or contains a volume# that does not exist.
If you try this test script what does it return? https://github.com/007revad/Synology_enable_Deduplication/blob/test/script_on_ssd.sh
When you run syno_enable_dedupe.sh what does the 4th or 5th line show?
It should show the path and filename of the script, like:
Running from: /volume1/scripts/syno_enable_dedupe.sh
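For reference, a minimal sketch of how a "Running from:" line like this can be produced in a shell script. This is a hypothetical illustration only; the actual syno_enable_dedupe.sh may do it differently:

```shell
#!/bin/bash
# Hypothetical sketch, not the script's actual code.
script="$(readlink -f "$0")"    # absolute, symlink-resolved path of the running script
echo "Running from: $script"

# The first path component on DSM is the volume, e.g. "volume1".
scriptvol="$(echo "$script" | cut -d"/" -f2)"
echo "scriptvol: $scriptvol"
```

If `scriptvol` ends up empty or names a volume that doesn't exist, checks based on it can misfire, which matches the behavior described above.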
Running from: /volume1/admin/Synology_enable_Deduplication/syno_enable_dedupe.sh
/volume1/scripts/test.sh
scriptvol: volume1
vg: vg1
md: md3
md2
WARNING Don't store this script on an NVMe volume!
Strange. It doesn't do that for me.
I currently have 4 volumes:
And a SATA SSD as a read cache for volume 5.
/volume1/scripts/test.sh
scriptvol: volume1
vg: vg1
md: md3
md2
/volume3/scripts/test.sh
scriptvol: volume3
vg: vg3
md: md4
WARNING Don't store this script on an NVMe volume!
/volume4/scripts/test.sh
scriptvol: volume4
vg: vg4
md: md5
WARNING Don't store this script on an NVMe volume!
/volume5/scripts/test.sh
scriptvol: volume5
vg: vg5
md: md6
I just noticed that you and I are somehow getting an extra md2 on the line after "md: md3" for volume1.
I just ran that test script on my DS720+ which only has 1 HDD volume and I didn't get the warning or the extra md2.
/volume1/scripts/test.sh
scriptvol: volume1
vg: vg1
md: md2
md3 is my M.2 SSD cache. md2 is my RAID5 SATA SSD volume where the script is stored.
cat /proc/mdstat returns:
md3 : active raid1 nvme1n1p1[1] nvme0n1p1[0] 1000196800 blocks super 1.2 [2/2] [UU]
md2 : active raid5 sata1p3[0] sata3p3[2] sata4p3[3] sata5p3[4] sata6p3[5] sata2p3[1] 37453707840 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid1 sata1p2[0] sata4p2[5] sata5p2[4] sata6p2[3] sata3p2[2] sata2p2[1] 2097088 blocks [6/6] [UUUUUU]
md0 : active raid1 sata1p1[0] sata4p1[5] sata5p1[4] sata6p1[3] sata3p1[2] sata2p1[1] 8388544 blocks [6/6] [UUUUUU]
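A rough sketch of how /proc/mdstat output like the above could be used to tell whether an md array is NVMe-backed. This is an assumed approach, not necessarily the script's actual logic, demonstrated against the arrays quoted above:

```shell
#!/bin/bash
# Assumed detection logic: an md array is NVMe-backed if its member
# devices in /proc/mdstat are nvme* partitions.
mdstat='md3 : active raid1 nvme1n1p1[1] nvme0n1p1[0] 1000196800 blocks super 1.2 [2/2] [UU]
md2 : active raid5 sata1p3[0] sata2p3[1] 37453707840 blocks super 1.2 [6/6] [UUUUUU]'

check_md() {
    # Look only at the line for the given array, then test for nvme members.
    if echo "$mdstat" | grep "^$1 : " | grep -q nvme; then
        echo "WARNING Don't store this script on an NVMe volume!"
    fi
}

check_md md3   # prints the warning (nvme members)
check_md md2   # prints nothing (sata members)
```

On a live system the variable would of course come from `cat /proc/mdstat`; the warning then only fires if the volume's md device resolves to the NVMe array, which is why resolving the wrong md matters.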
You're seeing the same as me. md3 is the SSD cache. md2 is the HDD array.
On my DS1821+ I get this:
# lvdisplay | grep /volume_1 | cut -d"/" -f3
vg1
And this:
# pvdisplay | grep -B 1 vg1 | grep /dev/ | cut -d"/" -f3
md3
md2
Digging a little deeper I see this:
# pvdisplay | grep -B 1 vg1
PV Name /dev/md3
VG Name shared_cache_vg1
--
PV Name /dev/md2
VG Name vg1
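This explains the stray md3: grepping for `vg1` as a substring also matches `shared_cache_vg1`, so the SSD cache's PV gets picked up alongside the real volume's PV. A minimal demonstration against simulated pvdisplay output, with one possible fix (anchoring the match to the exact VG name; the actual released fix may differ):

```shell
#!/bin/bash
# Simulated (abridged) pvdisplay output matching the thread above.
pvdisplay_out='  PV Name               /dev/md3
  VG Name               shared_cache_vg1
  PV Name               /dev/md2
  VG Name               vg1'

# Naive match: "vg1" is a substring of "shared_cache_vg1", so both PVs match.
echo "$pvdisplay_out" | grep -B 1 vg1 | grep /dev/ | cut -d"/" -f3
# md3
# md2

# Possible fix: anchor the pattern so only the exact VG name "vg1" qualifies.
echo "$pvdisplay_out" | grep -B 1 -E 'VG Name +vg1$' | grep /dev/ | cut -d"/" -f3
# md2
```

Alternatively, LVM can do the filtering itself, e.g. `pvs --noheadings -o pv_name -S vg_name=vg1`, which avoids text matching on pvdisplay output entirely.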
I believe I've found the solution. Can you try this test script to confirm it works correctly for you. https://github.com/007revad/Synology_enable_Deduplication/blob/test/script_on_ssd.sh
Sure, it no longer returns the warning:
/volume1/scripts/test.sh
scriptvol: volume1
vg: vg1
md: md2
Fixed in https://github.com/007revad/Synology_enable_Deduplication/releases/tag/v1.2.17
Thank you.
It was my pleasure. I have to thank you for creating, maintaining and sharing these valuable scripts with us. Very much appreciated!
Script reports to be run from M.2 when using an M.2 SSD cache on the volume. The same occurs for the Synology_HDD_DB script as well.
The cat /proc/mdstat command returns the following on my DS1621+:
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md3 : active raid1 nvme1n1p1[1] nvme0n1p1[0] 1000196800 blocks super 1.2 [2/2] [UU]
md2 : active raid5 sata1p3[0] sata3p3[2] sata4p3[3] sata5p3[4] sata6p3[5] sata2p3[1] 37453707840 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid1 sata1p2[0] sata4p2[5] sata5p2[4] sata6p2[3] sata3p2[2] sata2p2[1] 2097088 blocks [6/6] [UUUUUU]
md0 : active raid1 sata1p1[0] sata4p1[5] sata5p1[4] sata6p1[3] sata3p1[2] sata2p1[1] 8388544 blocks [6/6] [UUUUUU]
unused devices:
Thanks for looking into it.