007revad / Synology_HDD_db

Add your HDD, SSD and NVMe drives to your Synology's compatible drive database and a lot more
MIT License
2.39k stars 164 forks source link

DS1821+ with 2x NVMe internal and 2x NVMe on E10M20-T1, not showing in Storage Manager after script #148

Closed RozzNL closed 9 months ago

RozzNL commented 11 months ago

Hi all, I have a DS1821+ running DSM 7.2-64570 Update 3. I installed 2 Samsung NVMe drives in the Syno's internal slots, and after running the syno_hdd_db.sh script from [u/daveR007](https://www.reddit.com/u/daveR007/) the SSDs showed up and I could use them as cache. I ran it that way for a couple of years. Recently I found the E10M20-T1 card and installed it with 2 more NVMe drives, then ran the script again:

root@DS1821:/volume1/homes/admin/Scripts# ./syno_hdd_db.sh -nfr

Synology_HDD_db v3.1.64
DS1821+ DSM 7.2-64570-3
Using options: -nfr
Running from: /volume1/homes/admin/Scripts/syno_hdd_db.sh

HDD/SSD models found: 2
ST14000NM001G-2KJ103,SN03
ST16000NM001G-2KK103,SN03

M.2 drive models found: 2
Samsung SSD 970 EVO 1TB,2B2QEXE7
Samsung SSD 970 EVO Plus 2TB,2B2QEXM7

M.2 PCIe card models found: 1
E10M20-T1

No Expansion Units found

ST14000NM001G-2KJ103 already exists in ds1821+_host_v7.db
ST16000NM001G-2KK103 already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_e10m20-t1_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_e10m20-t1_v7.db

E10M20-T1 NIC already enabled for DS1821+
E10M20-T1 NVMe already enabled for DS1821+
E10M20-T1 SATA already enabled for DS1821+
E10M20-T1 already enabled in model.dtb

Disabled support disk compatibility.
Disabled support memory compatibility.
Max memory already set to 64 GB.

M.2 volume support already enabled.
Disabled drive db auto updates.
DSM successfully checked disk compatibility.
You may need to reboot the Synology to see the changes.

So it sees the 4 SSDs but does not show them in the Synology GUI. I ran syno_create_m2_volume.sh and created 2 RAID 1 volumes: one on the onboard slots and one on the E10M20-T1 card.

But they still do not show up in the GUI, and there is no Online Assemble option either.

Answer from private chat with Dave: This is caused by DSM 7.2 Update 3 adding a power_limit for NVMe drives

007revad commented 11 months ago

I need to get some information from you. Can you reply with what the following commands return:

synodisk --enum -t cache

cat /sys/block/nvme0n1/device/syno_block_info

cat /sys/block/nvme1n1/device/syno_block_info

cat /sys/block/nvme2n1/device/syno_block_info

cat /sys/block/nvme3n1/device/syno_block_info

007revad commented 11 months ago

And 2 more:

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:* ; do echo "$d"; done

Assuming the last line of that command ended in 0000:07:00.0 then run this command:

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:* ; do echo "$d"; done
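
The per-drive `cat` commands above can also be collected in one pass. This is a sketch of my own (not from the thread): the function name and the optional scratch-root argument are my additions so the loop can be exercised off-NAS.

```shell
#!/bin/sh
# Dump the pciepath DSM recorded for every NVMe block device.
# $1 is an optional scratch root for testing; omit it on the NAS.
list_nvme_pciepaths() {
    sysroot="${1:-}"
    for f in "$sysroot"/sys/block/nvme*n1/device/syno_block_info; do
        [ -e "$f" ] || continue   # glob matched nothing: no NVMe devices
        printf '%s: %s\n' "${f#"$sysroot"}" "$(cat "$f")"
    done
}

list_nvme_pciepaths
```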

RozzNL commented 11 months ago

I used the create m2 volume script again and created 4x single volumes; I hope this doesn't mess up the information you need.

synodisk --enum -t cache
No info returned

cat /sys/block/nvme0n1/device/syno_block_info
pciepath=00:01.2,00.0,04.0,00.0

cat /sys/block/nvme1n1/device/syno_block_info
pciepath=00:01.2,00.0,08.0,00.0

cat /sys/block/nvme2n1/device/syno_block_info
pciepath=00:01.3,00.0

cat /sys/block/nvme3n1/device/syno_block_info
pciepath=00:01.4,00.0

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:* ; do echo "$d"; done

/sys/devices/pci0000:00/0000:00:01.2/0000:00:01.2:pcie01
/sys/devices/pci0000:00/0000:00:01.2/0000:00:01.2:pcie02
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0

Yes, it indeed returned the info you assumed.

for d in /sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:* ; do echo "$d"; done

/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:07:00.0:pcie12
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:00.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:02.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:03.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0
/sys/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:0c.0

007revad commented 11 months ago

I have enough to create a model.dtb file for your DS1821+ to make the E10M20-T1 and its NVMe drives appear in Storage Manager.

But the result of the last command is a little confusing. Though it doesn't matter for what we're doing.

  1. 0000:08:00.0
  2. 0000:08:02.0 is one of the M.2 slots in the E10M20-T1 for a SATA M.2 drive.
  3. 0000:08:03.0 is one of the M.2 slots in the E10M20-T1 for a SATA M.2 drive.
  4. 0000:08:04.0 is the M.2 slot 2 in the E10M20-T1 for an NVMe drive.
  5. 0000:08:08.0 is the M.2 slot 1 in the E10M20-T1 for an NVMe drive.
  6. 0000:08:0c.0

I don't know what 0000:08:00.0 and 0000:08:0c.0 are for. One of them could be for the 10G in the E10M20-T1.

RozzNL commented 11 months ago

Great! I don't mind testing some more for you if you need the info for the future.

007revad commented 11 months ago

Can you download this zip file: ds1821+_model_with_e10m20-t1.zip

Then

  1. Unzip it to a directory on the DS1821+
  2. cd to that directory.
  3. chmod 644 model.dtb
  4. cp -p /etc.defaults/model.dtb /etc.defaults/model.dtb.bak
  5. cp -pu model.dtb /etc.defaults/model.dtb
  6. cp -pu model.dtb /etc/model.dtb
  7. Reboot
  8. Check that Storage Manager now shows the E10M20-T1 and its NVMe drives.
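
The copy steps above can be sketched as one function. This is my own wrapper, not part of Dave's script: the `root` argument points the copies at a scratch tree for testing; on the NAS call it as `install_model_dtb ./model.dtb` (as root, from the unzip directory).

```shell
#!/bin/sh
# Install a patched model.dtb into both locations DSM reads, keeping a backup.
install_model_dtb() {
    src="$1"          # path to the patched model.dtb
    root="${2:-}"     # "" on a real NAS; a temp dir when testing
    chmod 644 "$src"                                              # step 3
    cp -p "$root/etc.defaults/model.dtb" \
          "$root/etc.defaults/model.dtb.bak"                      # step 4: backup
    cp -pu "$src" "$root/etc.defaults/model.dtb"                  # step 5
    cp -pu "$src" "$root/etc/model.dtb"                           # step 6
    echo "model.dtb installed - reboot to apply"                  # steps 7-8 are manual
}
```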
007revad commented 11 months ago

I don't mind testing some more for you if you need the info for the future?

I will take you up on that.

RozzNL commented 11 months ago

Nope, nothing changed in Storage Manager

007revad commented 11 months ago

That's disappointing and unexpected.

It's 9pm here and it's been a busy day. I'll get back to you tomorrow.

RozzNL commented 11 months ago

No probs Dave, thanks for so far.

zcpnate commented 11 months ago

This appears to be the same as my open issue #132. Reverting to 7.2u1 does consistently fix it, but I'm now stuck on that DSM version.

007revad commented 11 months ago

EDIT Don't worry about these commands. See my later comment here.

What do the following commands return:

grep "e10m20-t1" /run/model.dtb

grep "power_limit" /run/model.dtb

grep "100,100,100,100" /run/model.dtb

get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_nvme DS1821+

get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+

007revad commented 11 months ago

@zcpnate @cfsnate

Yes, this is the same problem. I was going to reply to issue #132 once @RozzNL had confirmed the fix is working.

007revad commented 11 months ago

@RozzNL @zcpnate @cfsnate

I just realised why the edited model.dtb file didn't work. If you have syno_hdd_db scheduled to run at start-up or shutdown, it will replace the edited model.dtb file with the one for 7.2 Update 1... which is not what we want.

The syno_hdd_db.sh that I have scheduled to run at boot-up has the check_modeldtb "$c" lines commented out. For the E10M20-T1 you want to change line 1335 from check_modeldtb "$c" to #check_modeldtb "$c"

After editing syno_hdd_db.sh redo the steps in this comment.
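
A sketch of the same edit done with sed instead of a text editor. It comments out every `check_modeldtb` call by name rather than by line number, since line 1335 will drift between script versions; the function name and `.bak` suffix are my own choices, not Dave's.

```shell
#!/bin/sh
# Comment out all check_modeldtb calls in the given script, keeping a backup.
comment_out_check_modeldtb() {
    f="$1"
    cp -p "$f" "$f.bak"   # keep the unedited script
    sed -i 's/^\([[:space:]]*\)check_modeldtb /\1#check_modeldtb /' "$f"
}
```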

RozzNL commented 11 months ago

Will try that later today Dave

RozzNL commented 11 months ago

Just for the sake of testing, did your commands before i edited the script:

What do the following commands return:

grep "e10m20-t1" /run/model.dtb
Binary file /run/model.dtb matches

grep "power_limit" /run/model.dtb
Binary file /run/model.dtb matches

grep "100,100,100,100" /run/model.dtb
Binary file /run/model.dtb matches

get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_nvme DS1821+
yes

get_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+
yes

@RozzNL @zcpnate @cfsnate

I just realised why the edited model.dtb file didn't work. If you have syno_hdd_db scheduled to run at start-up or shutdown, it will replace the edited model.dtb file with the one for 7.2 Update 1... which is not what we want.

The syno_hdd_db.sh that I have scheduled to run at boot-up has the check_modeldtb "$c" lines commented out. For the E10M20-T1 you want to change line 1335 from check_modeldtb "$c" to #check_modeldtb "$c"

After editing syno_hdd_db.sh redo the steps in this comment.

Commented it out, reapplied the model.dtb and the applicable commands, and rebooted. The modified script with the commented-out check runs at shutdown. After boot-up, still no drives in Storage Manager. 👎

EDIT: I double-checked that I'm using the modified model.dtb file you gave me; the dates and size are the same as your modified file.

EDIT2: I do run syno_hdd_db.sh with the -nfr option, btw.

007revad commented 11 months ago

Try disabling the schedules for syno_hdd_db and leaving it disabled, then run this command set_section_key_value /usr/syno/etc.defaults/adapter_cards.conf E10M20-T1_sup_sata DS1821+ no

007revad commented 11 months ago

Can you tell me what these commands return:

synodisk --enum -t cache

udevadm info --query path --name nvme0

udevadm info --query path --name nvme1

udevadm info --query path --name nvme2

udevadm info --query path --name nvme3

RozzNL commented 11 months ago

Disabled schedule, ran command, rebooted, nothing changed in Storage Manager.

Can you tell me what these commands return:

synodisk --enum -t cache
Nothing returned

udevadm info --query path --name nvme0
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:04.0/0000:0c:00.0/nvme/nvme0

udevadm info --query path --name nvme1
/devices/pci0000:00/0000:00:01.2/0000:07:00.0/0000:08:08.0/0000:0d:00.0/nvme/nvme1

udevadm info --query path --name nvme2
/devices/pci0000:00/0000:00:01.3/0000:0f:00.0/nvme/nvme2

udevadm info --query path --name nvme3
/devices/pci0000:00/0000:00:01.4/0000:10:00.0/nvme/nvme3

EDIT: Looking at your command, and looking in the file adapter_cards.conf, I see [E10M20-T1_sup_nic], [E10M20-T1_sup_nvme] and [E10M20-T1_sup_sata] with DS1821+=yes, but also, lower in the list, DS1821+=no

There are multiple references for the same model... not only for the DS1821+ but also for other DSes.

007revad commented 11 months ago

I don't understand why synodisk --enum -t cache is not returning anything.

Are there any nvme errors if you run: sudo grep synostgd-disk /var/log/messages | tail -10

RozzNL commented 11 months ago

I don't understand why synodisk --enum -t cache is not returning anything.

Are there any nvme errors if you run: sudo grep synostgd-disk /var/log/messages | tail -10

2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme3n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme3n1

EDIT: But running the script (with line 1335 modified):

./syno_hdd_db.sh -nfr

Synology_HDD_db v3.1.64
DS1821+ DSM 7.2-64570-3
Using options: -nfr
Running from: /volume1/homes/admin/Scripts/syno_hdd_db.sh

HDD/SSD models found: 2
ST14000NM001G-2KJ103,SN03
ST16000NM001G-2KK103,SN03

M.2 drive models found: 2
Samsung SSD 970 EVO 1TB,2B2QEXE7
Samsung SSD 970 EVO Plus 2TB,2B2QEXM7

M.2 PCIe card models found: 1
E10M20-T1

No Expansion Units found

ST14000NM001G-2KJ103 already exists in ds1821+_host_v7.db
ST16000NM001G-2KK103 already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO 1TB already exists in ds1821+_e10m20-t1_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_host_v7.db
Samsung SSD 970 EVO Plus 2TB already exists in ds1821+_e10m20-t1_v7.db

E10M20-T1 NIC already enabled for DS1821+
E10M20-T1 NVMe already enabled for DS1821+
E10M20-T1 SATA already enabled for DS1821+

Disabled support disk compatibility.
Disabled support memory compatibility.
Max memory already set to 64 GB.
M.2 volume support already enabled.
Disabled drive db auto updates.
DSM successfully checked disk compatibility.
You may need to reboot the Synology to see the changes.

007revad commented 11 months ago

Synology uses the same adapter_cards.conf on every Synology NAS model (even models without a PCIe slot). It lists which PCIe adapter cards each model supports.

Can you try deleting the line that says "DS1821+=no"

I also just noticed that every model that officially supports the E10M20-T1 is listed as yes in the [E10M20-T1_sup_sata] section. Even though Synology's information says the E10M20-T1 does not support SATA M.2 drives on any NAS model.

The Xpenology people just add the NAS model = yes under every section in adapter_cards.conf

007revad commented 11 months ago

Incorrect power limit number 4!=2

Okay, so it's not happy with the "100,100,100,100" power limit I added. Which is what the Xpenology people use. They add an extra 100 for each NVMe drive found.

What does the following command return? cat /sys/firmware/devicetree/base/power_limit && echo

The only Synology models I own that have M.2 slots have:

  • DS720+ has a "11.55,5.775" power limit.
  • DS1821+ has a "14.85,9.075" power limit.

I tried "14.85,9.075,14.85,9.075" but it didn't work and I was getting errors in the log:

  • nvme_model_spec_get.c:81 Fail to get fdt property of power_limit
  • nvme_model_spec_get.c:359 Fail to get power limit of nvme0n1

Changing the power limit to "100,100,100,100" worked perfectly on my DS1821+ with 2 NVMe drives in a M2D18 adapter card, and none in the internal M.2 slots.

Before I changed the power limit to "100,100,100,100" the fans in my DS1821+ were going full speed and my NVMe drives and the M2D18 adapter card did not show in storage manager.
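
The "4!=2" in the log reads as DSM comparing how many NVMe drives it found (4) against how many comma-separated entries the device-tree power_limit holds (2). A sketch of that comparison, assuming the same property path as above; the `count_csv` helper is my own:

```shell
#!/bin/sh
# Count comma-separated fields in a string.
count_csv() { echo "$1" | awk -F, '{print NF}'; }

# Device-tree strings are NUL-terminated, so strip the NUL before counting.
limits=$(tr -d '\0' 2>/dev/null < /sys/firmware/devicetree/base/power_limit)
drives=$(ls -d /sys/block/nvme*n1 2>/dev/null | wc -l)
echo "power_limit entries: $(count_csv "$limits"), NVMe drives: $drives"
```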

zcpnate commented 11 months ago

I can assist with my working setup on 7.2u1 if there's some way to check existing working power limits?

007revad commented 11 months ago

I can assist with my working setup on 7.2u1 if there's some way to check existing working power limits?

7.2u1 didn't have a power limit. Synology added the power limit in 7.2u2

RozzNL commented 11 months ago

Ah... the [ ] are separate sections, gotcha. EDIT: All 3 sections regarding the E10M20-T1 are already set to yes for the DS1821+ and I can't find the DS1821+=no anymore... wonder if running your script changed this?

Incorrect power limit number 4!=2

Okay, so it's not happy with the "100,100,100,100" power limit I added. Which is what the Xpenology people use. They add an extra 100 for each NVMe drive found.

What does the following command return?

cat /sys/firmware/devicetree/base/power_limit && echo
14.85,9.075

The only Synology models I own that have M.2 slots have:

  • DS720+ has a "11.55,5.775" power limit.
  • DS1821+ has a "14.85,9.075" power limit.

I tried "14.85,9.075,14.85,9.075" but it didn't work and I was getting errors in the log:

  • nvme_model_spec_get.c:81 Fail to get fdt property of power_limit
  • nvme_model_spec_get.c:359 Fail to get power limit of nvme0n1

Changing the power limit to "100,100,100,100" worked perfectly on my DS1821+ with 2 NVMe drives in a M2D18 adapter card, and none in the internal M.2 slots.

Before I changed the power limit to "100,100,100,100" the fans in my DS1821+ were going full speed and my NVMe drives and the M2D18 adapter card did not show in storage manager.

007revad commented 11 months ago

@zcpnate

Can you check if smartctl --info /dev/nvme0 works for NVMe drives in 7.2u1

007revad commented 11 months ago

All 3 sections regarding the E10M20-T1 are already set to yes for the DS1821+ and I can't find the DS1821+=no anymore... wonder if running your script changed this?

Yes, running syno_hdd_db would have set it back to yes. But I don't think it matters.

zcpnate commented 11 months ago

@zcpnate

Can you check if smartctl --info /dev/nvme0 works for NVMe drives in 7.2u1

ash-4.4# smartctl --info /dev/nvme0
smartctl 6.5 (build date Sep 26 2022) [x86_64-linux-4.4.302+] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

Read NVMe Identify Controller failed: NVMe Status 0x400b
007revad commented 11 months ago

Read NVMe Identify Controller failed: NVMe Status 0x400b

On 7.2u3 I get Read NVMe Identify Controller failed: NVMe Status 0x4002

Someone else on 7.2.1 gets Read NVMe Identify Controller failed: NVMe Status 0x200b

The only thing that's consistent is that smartctl --info for nvme drives doesn't work in DSM 7.2

zcpnate commented 11 months ago

I tested a few other nvme drives and got 200b for my internally mounted nvme drives acting as a volume.

RozzNL commented 11 months ago

I too get the 0x200b

007revad commented 11 months ago

Can you try:

synodiskport -cache

synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1

synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1

synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1

synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1

zcpnate commented 11 months ago

Can you try:

synodiskport -cache

ash-4.4# synodiskport -cache
nvme0n1 nvme1n1 nvme2n1 nvme3n1

synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
E10M20-T1
Device: /dev/nvme0n1, PCI Slot: 1, Card Slot: 2

synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
E10M20-T1
Device: /dev/nvme1n1, PCI Slot: 1, Card Slot: 1

synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
Not M.2 adapter card
Device: /dev/nvme2n1, PCI Slot: 0, Card Slot: 1

synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme3n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Device: /dev/nvme3n1, PCI Slot: 0, Card Slot: 2

007revad commented 11 months ago

I had a typo in the last command. It should return the same result, but the command should have been: synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1

zcpnate commented 11 months ago

I had a typo in the last command. It should return the same result, but the command should have been: synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1

ash-4.4# synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Device: /dev/nvme3n1, PCI Slot: 0, Card Slot: 2

Blind copy-paste haha, didn't catch that.

007revad commented 11 months ago

While searching for what causes the "nvme_model_spec_get.c:90 Incorrect power limit number 4!=2" log entry I found 7.2-U3 has 2 scripts related to nvme power. I need to check if 7.2.1 still has those scripts.

syno_nvme_power_limit_set.service runs /usr/syno/lib/systemd/scripts/syno_nvme_set_power_limit.sh

/usr/syno/lib/systemd/scripts/syno_nvme_set_power_limit.sh then runs /usr/syno/lib/systemd/scripts/nvme_power_state.sh -d $dev_name -p $pwr_limit which sets the power limit to $pwr_limit for nvme drive $dev_name

It can also list the power states of the specified nvme drive. Strangely, both my DS720+ and DS1821+ return the exact same power states even though they have different power_limits set in model.dtb

For me /usr/syno/lib/systemd/scripts/nvme_power_state.sh --list -d nvme0 returns:

========== list all power states of nvme0 ==========
ps 0:   max_power 4.70W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:4.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0
ps 1:   max_power 3.00W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:3.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0
ps 2:   max_power 2.20W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:2.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0
ps 3:   max_power 0.0150W non-operational enlat:1500 exlat:2500 rrt:3 rrl:3 rwt:3 rwl:3 idle_power:0.0150 W     non-operational rrt 3   rrl 3   rwt 3   rwl 3
ps 4:   max_power 0.0050W non-operational enlat:10000 exlat:6000 rrt:4 rrl:4 rwt:4 rwl:4 idle_power:0.0050 W    non-operational rrt 4   rrl 4   rwt 4   rwl 4
ps 5:   max_power 0.0033W non-operational enlat:176000 exlat:25000 rrt:5 rrl:5 rwt:5 rwl:5 idle_power:0.0033 W  non-operational rrt 5   rrl 5   rwt 5   rwl 5

========== nvme0 result ==========
ps 0:   max_power 4.70W operational enlat:0 exlat:0 rrt:0 rrl:0 rwt:0 rwl:0 idle_power:0.3000W active_power:4.02 W      operational     rrt 0   rrl 0   rwt 0  rwl 0

add to task schedule? false
RozzNL commented 11 months ago

For me, /usr/syno/lib/systemd/scripts/nvme_power_state.sh --list -d nvme0 returns:

========== list all power states of nvme0 ==========
ps 0:   max_power 7.50 W        operational     rrt 0   rrl 0   rwt 0   rwl 0
ps 1:   max_power 5.90 W        operational     rrt 1   rrl 1   rwt 1   rwl 1
ps 2:   max_power 3.60 W        operational     rrt 2   rrl 2   rwt 2   rwl 2
ps 3:   max_power 0.0700 W      non-operational rrt 3   rrl 3   rwt 3   rwl 3
ps 4:   max_power 0.0050 W      non-operational rrt 4   rrl 4   rwt 4   rwl 4

========== nvme0 result ==========
ps 0:   max_power 7.50 W        operational     rrt 0   rrl 0   rwt 0   rwl 0

add to task schedule? false
007revad commented 11 months ago

Yours looks more like I'd expect the output of a Synology command or script to look like.

Does this return an error? Or a list of nvme drives and power limits?

nvme_list=$(synodiskport -cache)
output=$(/usr/syno/bin/synonvme --get-power-limit $nvme_list)
echo ${output[@]}
RozzNL commented 11 months ago

Nope, it doesn't return anything...

007revad commented 11 months ago

So what about these:

nvme_list=$(synodiskport -cache) && echo ${nvme_list[@]}

output=$(synonvme --get-power-limit $nvme_list) && echo ${output[@]}

synonvme --get-power-limit nvme0n1

synonvme --get-power-limit nvme1n1

synonvme --get-power-limit nvme2n1

synonvme --get-power-limit nvme3n1
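
The four queries above can be run in one loop that prints the device name even when `synonvme` returns nothing, so silent failures stand out. A sketch only; `synodiskport` and `synonvme` are DSM binaries, and the wrapper function is my own:

```shell
#!/bin/sh
# Query the DSM power limit for each device in a space-separated list.
print_power_limits() {
    for dev in $1; do
        printf '%s: %s\n' "$dev" "$(synonvme --get-power-limit "$dev" 2>&1)"
    done
}

# On the NAS, feed it the cache device list:
command -v synodiskport >/dev/null 2>&1 && print_power_limits "$(synodiskport -cache)"
```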

RozzNL commented 11 months ago

All return with nothing 👎

007revad commented 11 months ago

Does synodiskport -cache

return: nvme0n1 nvme1n1 nvme2n1 nvme3n1

RozzNL commented 11 months ago

Nope, still returns nothing... and I still have the same errors, btw:

2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme2n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:90 Incorrect power limit number 4!=2
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_model_spec_get.c:164 Fail to get power limit of nvme3n1
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_slot_info_get.c:37 Failed to get model specification
2023-10-01T18:10:30+02:00 DS1821 synostgd-disk[14047]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme3n1

zcpnate commented 11 months ago

FYI these power limit scripts do not exist on 7.2u1

007revad commented 11 months ago

I just installed a 3rd NVMe drive in one of the internal M.2 slots to see if I got 4!=2 in logs but I didn't.

@RozzNL Can you do the following:

  1. Edit line 1334 in syno_hdd_db.sh to change this:
    • enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
    • to this:
    • #enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
  2. Delete the DS1821+=yes line under the [E10M20-T1_sup_sata] section in /usr/syno/etc.defaults/adapter_cards.conf

Also, have you ever run the syno_enable_m2_volume script? In DSM 7.2 update 3 or an earlier DSM version? I have NOT run it since updating to DSM 7.2 update 3.

007revad commented 11 months ago

If anyone wants a quick solution (instead of waiting for more trial and error testing) you can replace /usr/lib/libsynonvme.so.1 with the one from DSM 7.2-64570. I know this works in 7.2 update 2 and update 3. But I have no idea if it works in 7.2.1

  1. Download DS1821+_64570_libsynonvme.so.1.zip and unzip it.
  2. Backup existing libsynonvme.so.1 and append build and update version:
    • build=$(get_key_value /etc.defaults/VERSION buildnumber)
    • nano=$(get_key_value /etc.defaults/VERSION nano)
    • cp -p /usr/lib/libsynonvme.so.1 /usr/lib/libsynonvme.so.1.${build}-${nano}.bak
  3. cd to the folder where you unzipped the downloaded libsynonvme.so.1
  4. mv -f libsynonvme.so.1 /usr/lib/libsynonvme.so.1 && chmod a+r /usr/lib/libsynonvme.so.1
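
Steps 2-4 above can be sketched as one function. A sketch of my own: the `root` argument points the copy logic at a scratch tree for testing (pass "" on the NAS), and `get_ver_key` is a portable stand-in for DSM's get_key_value helper on key="value" lines.

```shell
#!/bin/sh
# Read key="value" entries from a VERSION-style file.
get_ver_key() {
    sed -n "s/^$2=\"\{0,1\}\([^\"]*\)\"\{0,1\}\$/\1/p" "$1"
}

# Back up the stock libsynonvme.so.1 (tagged with build-update) and replace it.
replace_libsynonvme() {
    src="$1"; root="${2:-}"
    build=$(get_ver_key "$root/etc.defaults/VERSION" buildnumber)
    nano=$(get_ver_key "$root/etc.defaults/VERSION" nano)
    cp -p "$root/usr/lib/libsynonvme.so.1" \
          "$root/usr/lib/libsynonvme.so.1.${build}-${nano}.bak"   # step 2: backup
    mv -f "$src" "$root/usr/lib/libsynonvme.so.1"                 # step 4: replace
    chmod a+r "$root/usr/lib/libsynonvme.so.1"
}
```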
RozzNL commented 11 months ago

I just installed a 3rd NVMe drive in one of the internal M.2 slots to see if I got 4!=2 in logs but I didn't.

@RozzNL Can you do the following:

  1. Edit line 1334 in syno_hdd_db.sh to change this:

    • enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
    • to this:
    • #enable_card "$m2cardconf" E10M20-T1_sup_sata "E10M20-T1 SATA"
  2. Delete the DS1821+=yes line under the [E10M20-T1_sup_sata] section in /usr/syno/etc.defaults/adapter_cards.conf

Also, have you ever run the syno_enable_m2_volume script? In DSM 7.2 update 3 or an earlier DSM version? I have NOT run it since updating to DSM 7.2 update 3.

Good morning all. Performed the comment-out, removed the DS1821+=yes line, rebooted: no change.

I have indeed run the enable_m2_volume script before, so I restored that by running the script again and rebooted, but I could not get back into the GUI and had to reboot twice more. After a successful reboot, still no change.

Checked that the comment-out and removed line were still in place (just to be sure the m2_volume script had not interfered), and I forgot to run the hdd_db script after editing it, duh... so I reran everything to check: still no change.

EDIT: Checking some of the commands you sent previously:

synonvme --m2-card-model-get /dev/nvme3n1; synonvme --get-location /dev/nvme3n1
Not M.2 adapter card
Can't get the location of /dev/nvme3n1

synonvme --m2-card-model-get /dev/nvme2n1; synonvme --get-location /dev/nvme2n1
Not M.2 adapter card
Can't get the location of /dev/nvme2n1

synonvme --m2-card-model-get /dev/nvme0n1; synonvme --get-location /dev/nvme0n1
E10M20-T1
Can't get the location of /dev/nvme0n1

synonvme --m2-card-model-get /dev/nvme1n1; synonvme --get-location /dev/nvme1n1
E10M20-T1
Can't get the location of /dev/nvme1n1

007revad commented 11 months ago

I'm curious if the issues @RozzNL is having are the same for everyone.

@zcpnate what does synodisk --enum -t cache return for you?

Are you willing to try 7.2 update 3 again, but this time:

  1. Disable any scheduled scripts first (and leave them disabled).
  2. Update to 7.2 update 3.
  3. Check if synodisk --enum -t cache returns something.
  4. Download the following test script and model.dtb file and put them both in the same directory.
  5. Run the script and check storage manager.
  6. Reboot and check storage manager.


zcpnate commented 11 months ago

Can get you this info tomorrow. I'd be willing to upgrade to u3 for testing, as I'm pretty sure I can reliably downgrade to u1 if it doesn't work out. Also totally willing to jump on a Zoom and debug in real time.