RROrg / rr

Redpill Recovery (arpl-i18n)
GNU General Public License v3.0

Version 23.9.0, 14 bays: the last 3 bays are lost after every reboot and must be repaired in Storage Manager #199

Closed jiang123574 closed 1 year ago

jiang123574 commented 1 year ago

My initial suspicion is that the boot is too fast: the loader finishes before the disks are ready, so the disks are only attached after boot. You can even watch drives show up after the system is already running. My setup: the first 6 drives are on the motherboard, the other 8 are on a passthrough HBA card.

wjz304 commented 1 year ago

@snailium You've given me an idea, though: maybe blacklisting drivers could achieve the desired effect.

But the ahci driver cannot be blacklisted.

Just a passing thought.

snailium commented 1 year ago

It doesn't look like DSM sees more disks; rather, the controller order differs between ARPL and DSM. DSM clearly enumerates the SATA controllers first and then the SAS HBA, whereas ARPL may mount SAS first and SATA afterwards?

snailium commented 1 year ago

How about computing maxdisks as the maximum possible count? 6x SATA + 8x SAS = 14; that can never be wrong.

wjz304 commented 1 year ago

No need to agonize over this; that's exactly why I don't do the *portcfg settings under arpl.

wjz304 commented 1 year ago

How about computing maxdisks as the maximum possible count? 6x SATA + 8x SAS = 14; that can never be wrong.

maxdisks should equal the largest sd* suffix index: if sdz exists, maxdisks should be 26, no matter how many disks you actually have, even if sdz is the only one.
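
A minimal shell sketch of that rule, assuming plain single-letter /dev/sd* names (illustrative only, not the actual addon code):

# Derive maxdisks from the highest sd* letter present, not from the disk count.
maxdisks=0
for dev in /dev/sd[a-z]; do
    [ -b "$dev" ] || continue
    idx=$(( $(printf '%d' "'${dev#/dev/sd}") - 96 ))   # 'a' -> 1 ... 'z' -> 26
    [ "$idx" -gt "$maxdisks" ] && maxdisks=$idx
done
echo "maxdisks=$maxdisks"   # with only /dev/sdz present this still prints 26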

snailium commented 1 year ago

How about computing maxdisks as the maximum possible count? 6x SATA + 8x SAS = 14; that can never be wrong.

maxdisks should equal the largest sd* suffix index: if sdz exists, maxdisks should be 26

That calculation is wrong. Right now my SAS HBA has only 5 drives attached; computed from the largest suffix, maxdisks=11. But I can hot-plug 3 more drives after the system boots, and those 3 would then exceed maxdisks and never be recognized. If only cases up to sdz need to be covered anyway, could we just set maxdisks to the constant 26 and skip the calculation?

wjz304 commented 1 year ago

Yes, that works.

As I said above, Addons 23.9.3 sets all of this automatically on every boot based on the disks currently present, and when an HBA is present it already sets maxdisks to 26.

wjz304 commented 1 year ago

https://github.com/wjz304/arpl-i18n/issues/199#issuecomment-1718497290 this should not be the result from Addons 23.9.3.

jiang123574 commented 1 year ago

Why didn't older loader versions have this problem? Could models like the 3622, which natively support HBA cards, be treated differently?

wjz304 commented 1 year ago

Why didn't older loader versions have this problem? Could models like the 3622, which natively support HBA cards, be treated differently?

Which old version do you mean? Didn't those have the problem of USB being detected as SATA and SATA as USB, plus the SN issue Q3 mentioned a few days ago? And honestly, no version has this problem for me.

jiang123574 commented 1 year ago

Why didn't older loader versions have this problem? Could models like the 3622, which natively support HBA cards, be treated differently?

Which old version do you mean? Didn't those have the problem of USB being detected as SATA and SATA as USB, plus the SN issue Q3 mentioned a few days ago? And honestly, no version has this problem for me.

Oh, right, actually. I forget which version I was using at the time; the passthrough card was fine, but a USB portable SSD I plugged in got detected as an internal drive.

snailium commented 1 year ago

Why didn't older loader versions have this problem? Could models like the 3622, which natively support HBA cards, be treated differently?

23.9.0 broke because upstream merged Broadcom's mpt3sas driver (41.00.00.00, see #209). 23.9.2 rolled the driver back to Synology's mpt3sas (09.102.00.00).

However, I noticed that the ARPL side of 23.9.2 uses an even newer mpt3sas driver (41.03.00.00).

wjz304 commented 1 year ago

Off to bed; my head hurts.

snailium commented 1 year ago

I tried manually setting maxdisks to 26 (adding a synoinfo entry in ARPL and rebuilding the loader) with the same result: two drives lost. My guess is that internalportcfg="0x1ff" is the cause.

I looked at the disks addon's install.sh, and the computation order seems off:

  1. First, the script derives maxdisks from the sd* devices (line 227).
  2. Then, it computes internalportcfg from that maxdisks value (line 249).
  3. After that, if an HBA is present, it sets maxdisks to 26 (line 257).

Shouldn't step 3 come before step 2?
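
Something like this, where derive_maxdisks_from_sd and has_hba are hypothetical stand-ins for the addon's real logic:

maxdisks=$(derive_maxdisks_from_sd)                            # step 1 (line 227)
if has_hba; then                                               # step 3 (line 257), moved up
    maxdisks=26
fi
internalportcfg=$(printf '0x%x' $(( (1 << maxdisks) - 1 )))    # step 2 (line 249)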

Also, step 3 has a precondition: maxdisks is only set to 26 if no maxdisks entry is found in synoinfo.conf. I'm not sure when synoinfo.conf is generated or whether the stock file already contains a maxdisks entry. If it does, the disks addon will honor the existing value and never update maxdisks to match the actual environment; I don't know whether that matches the intended behavior.


Update: I force-added the following to ARPL's synoinfo:

maxdisks: "26"
internalportcfg: "0x3ffffff"

After that, all drives were detected correctly.
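
As a sanity check, the two values are consistent with each other, assuming internalportcfg is a bitmask with one bit per internal disk slot:

$ printf '0x%x\n' $(( (1 << 26) - 1 ))   # bits 0..25 set, one per slot
0x3ffffff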

But the two drives that had disappeared now show as "Crashed" and need to be repaired again. Once the repair finishes I'll keep watching to see whether drives get lost again after a reboot.

snailium commented 1 year ago

@aarpro Try to add the following items to synoinfo in ARPL loader menu.

maxdisks: "26"
internalportcfg: "0x3ffffff"

And see if you can get all hard drives detected.

snailium commented 1 year ago

An update: after manually setting internalportcfg and finishing the repair, a reboot still loses drives, just like under 23.9.0. At boot only 3 of the 5 drives are recognized; the other 2 are picked up one after another after boot and show as "Crashed" in Storage Manager.

(screenshot: 2023-09-14 093750)

$ ls -d /dev/sd*
/dev/sdg   /dev/sdg2  /dev/sdg5  /dev/sdh1  /dev/sdh5  /dev/sdi1  /dev/sdi5  /dev/sdj1  /dev/sdj5  /dev/sdk1  /dev/sdk5
/dev/sdg1  /dev/sdg3  /dev/sdh   /dev/sdh2  /dev/sdi   /dev/sdi2  /dev/sdj   /dev/sdj2  /dev/sdk   /dev/sdk2

$ lspci -d ::106
0000:00:1f.2 Class 0106: Device 8086:2922 (rev 02)
0001:09:00.0 Class 0106: Device 1b4b:9235 (rev 11)
0001:0c:00.0 Class 0106: Device 1b4b:9235 (rev 11)

$ lspci -d ::107
0000:01:00.0 Class 0107: Device 1000:0086 (rev 05)

$ ls -l /sys/class/scsi_host
total 0
lrwxrwxrwx 1 root root 0 Sep 14 09:26 host0 -> ../../devices/pci0000:00/0000:00:1f.2/ata1/host0/scsi_host/host0
lrwxrwxrwx 1 root root 0 Sep 14 09:26 host1 -> ../../devices/pci0000:00/0000:00:1f.2/ata2/host1/scsi_host/host1
lrwxrwxrwx 1 root root 0 Sep 14 09:26 host2 -> ../../devices/pci0000:00/0000:00:1f.2/ata3/host2/scsi_host/host2
lrwxrwxrwx 1 root root 0 Sep 14 09:26 host3 -> ../../devices/pci0000:00/0000:00:1f.2/ata4/host3/scsi_host/host3
lrwxrwxrwx 1 root root 0 Sep 14 09:26 host4 -> ../../devices/pci0000:00/0000:00:1f.2/ata5/host4/scsi_host/host4
lrwxrwxrwx 1 root root 0 Sep 14 09:26 host5 -> ../../devices/pci0000:00/0000:00:1f.2/ata6/host5/scsi_host/host5
lrwxrwxrwx 1 root root 0 Sep 14 09:26 host6 -> ../../devices/pci0000:00/0000:00:1c.0/0000:01:00.0/host6/scsi_host/host6
lrwxrwxrwx 1 root root 0 Sep 14 09:26 host7 -> ../../devices/pci0000:00/0000:00:1a.7/usb1/1-1/1-1:1.0/host7/scsi_host/host7

$ cat /etc/synoinfo.conf | grep portcfg
esataportcfg="0x00"
internalportcfg="0x3ffffff"
usbportcfg="0x00"
sataportcfg="0x00"

$ cat /etc/synoinfo.conf | grep disks
maxdisks="26"

wjz304 commented 1 year ago

I think those two drives may have had problems from the start: they were never initialized during installation, so later detections keep finding them in an uninitialized state.

snailium commented 1 year ago

I think those two drives may have had problems from the start: they were never initialized during installation, so later detections keep finding them in an uninitialized state.

I upgraded from 23.8.6; earlier versions had no problem.

snailium commented 1 year ago
$ sudo diff /etc/synoinfo.conf.IvxJ5i /etc/synoinfo.conf
55c55
< internalportcfg="0x3ffff"
---
> internalportcfg="0x3ffffff"
72c72
< maxdisks="18"
---
> maxdisks="26"
300c300
< usbportcfg="0xf00000"
---
> usbportcfg="0x00"
441a442
> sataportcfg="0x00"

Is maxdisks="18" the default for the DS3622xs+?

Also, dmesg | grep mpt no longer shows the SAS drives, and the mpt3sas version has changed back to 22.00.02.00.

$ dmesg | grep mpt
[    2.011551] mpt3sas version 22.00.02.00 loaded
[    2.016735] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (8097276 kB)
[    2.090036] mpt2sas_cm0: IOC Number : 0
[    2.090217] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    2.106255] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 34
[    2.106435] mpt2sas0-msix1: PCI-MSI-X enabled: IRQ 35
[    2.106616] mpt2sas0-msix2: PCI-MSI-X enabled: IRQ 36
[    2.106785] mpt2sas0-msix3: PCI-MSI-X enabled: IRQ 37
[    2.106968] mpt2sas_cm0: iomem(0x00000000c2040000), mapped(0xffffc900002a0000), size(65536)
[    2.107266] mpt2sas_cm0: ioport(0x000000000000d000), size(256)
[    2.180049] mpt2sas_cm0: IOC Number : 0
[    2.180215] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    2.226213] mpt2sas_cm0: Allocated physical memory: size(15899 kB)
[    2.226422] mpt2sas_cm0: Current Controller Queue Depth(8056), Max Controller Queue Depth(8192)
[    2.226706] mpt2sas_cm0: Scatter Gather Elements per IO(128)
[    2.284026] mpt2sas_cm0: LSISAS2308: FWVersion(20.00.07.00), ChipRevision(0x05), BiosVersion(07.39.02.00)
[    2.284366] mpt2sas_cm0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[    2.284985] mpt2sas_cm0: SAS3 Controller found on slot 0000:00:1c.0
[    2.285567] mpt2sas_cm0: sending port enable !!
[    2.286811] mpt2sas_cm0: hba_port entry: ffff88027aab5bc0, port: 255 is added to hba_port list
[    2.289682] mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x500304801b3d9400), phys(8)
[    2.307026] mpt2sas_cm0: port enable: SUCCESS

The lsscsi output looks correct, though.

$ lsscsi
[6:0:0:0]    disk    ATA      CT480BX500SSD1           R013  /dev/sdg
[6:0:1:0]    disk    SEAGATE  ST4000NM0023             GS15  /dev/sdh
[6:0:2:0]    disk    SEAGATE  ST4000NM0023             GS15  /dev/sdi
[6:0:3:0]    disk    SEAGATE  ST4000NM0023             GS15  /dev/sdj
[6:0:4:0]    disk    SEAGATE  ST4000NM0023             GS15  /dev/sdk
[7:0:0:0]    disk    QEMU     QEMU HARDDISK            2.5+  /dev/synoboot
aarpro commented 1 year ago

@aarpro Try to add the following items to synoinfo in ARPL loader menu.

maxdisks: "26"
internalportcfg: "0x3ffffff"

And see if you can get all hard drives detected.

I can only check it a few hours later.

aarpro commented 1 year ago
the mpt3sas version has changed back to 22.00.02.00

Which ARPL / addons releases do I need to download to get the working mpt3sas (22.00.02.00)?

aarpro commented 1 year ago

Off to bed; my head hurts.

Maybe you could keep the old mpt3sas (22.00.02.00) in the new release (if that is indeed the problem)?

aarpro commented 1 year ago

Also, for information: after updating to ARPL 23.9.0 (I don't remember which release I updated from), the HDD/SSD firmware version detection shows an error. On 23.8.9 and earlier everything was fine. Back then all disk serial numbers were also detected and displayed correctly by DSM (as they are now).

(screenshot: 2023-09-14_193018)

snailium commented 1 year ago

I suspect some update broke mpt3sas.

Thanks to the beauty of Proxmox, I have an old backup of arpl.img (v23.6.7). It uses the latest Broadcom mpt3sas (41.00.00.00) and has no issue: all drives, SATA and SAS alike, are detected right from the start.

So this probably isn't a problem in the mpt3sas driver itself, but something deeper.

$ dmesg | grep mpt
[    2.043651] mpt3sas version 41.00.00.00 loaded
[    2.045426] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (8097276 kB)
[    2.117043] mpt2sas_cm0: IOC Number : 0
[    2.117210] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    2.132822] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 34
[    2.133022] mpt2sas0-msix1: PCI-MSI-X enabled: IRQ 35
[    2.133189] mpt2sas0-msix2: PCI-MSI-X enabled: IRQ 36
[    2.133355] mpt2sas0-msix3: PCI-MSI-X enabled: IRQ 37
[    2.133522] mpt2sas_cm0: iomem(0x00000000c2040000), mapped(0xffffc90000280000), size(65536)
[    2.133809] mpt2sas_cm0: ioport(0x000000000000d000), size(256)
[    2.206033] mpt2sas_cm0: IOC Number : 0
[    2.206175] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[    2.243045] mpt2sas_cm0: scatter gather: sge_in_main_msg(1), sge_per_chain(9), sge_per_io(128), chains_per_io(15)
[    2.243643] mpt2sas_cm0: request pool(0xffff880076400000) - dma(0x76400000): depth(8320), frame_size(128), pool_size(1040 kB)
[    2.258144] mpt2sas_cm0: sense pool(0xffff880266800000) - dma(0x266800000): depth(8059), element_size(96), pool_size (755 kB)
[    2.258559] mpt2sas_cm0: reply pool(0xffff88026b800000) - dma(0x26b800000): depth(8384)frame_size(128), pool_size(1048 kB)
[    2.258960] mpt2sas_cm0: config page(0xffff88027be95000) - dma(0x27be95000): size(512)
[    2.259239] mpt2sas_cm0: Allocated physical memory: size(18510 kB)
[    2.259449] mpt2sas_cm0: Current Controller Queue Depth(8056), Max Controller Queue Depth(8192)
[    2.319515] mpt2sas_cm0: LSISAS2308: FWVersion(20.00.07.00), ChipRevision(0x05)
[    2.319775] mpt2sas_cm0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[    2.320383] mpt3sas 0000:01:00.0: Enabled Extended Tags as Controller Supports
[    2.321192] mpt2sas_cm0: sending port enable !!
[    2.323474] mpt2sas_cm0: hba_port entry: ffff880270b18100, port: 255 is added to hba_port list
[    2.326568] mpt2sas_cm0: host_add: handle(0x0001), sas_addr(0x500304801b3d9400), phys(8)
[    2.329597] mpt2sas_cm0: handle(0xa) sas_address(0x4433221103000000) port_type(0x1)
[    2.330828] mpt2sas_cm0: handle(0xb) sas_address(0x5000c50056891ce1) port_type(0x1)
[    2.332028] mpt2sas_cm0: port enable: SUCCESS
[    2.332081] mpt2sas_cm0: handle(0x9) sas_address(0x5000c50056896795) port_type(0x1)
[    2.332985] mpt2sas_cm0: detecting: handle(0x000d), sas_address(0x5000c50056896855), phy(6)
[    2.333310] mpt2sas_cm0: REPORT_LUNS: handle(0x000d), retries(0)
[    2.334075] mpt2sas_cm0: TEST_UNIT_READY: handle(0x000d), lun(0)
[    2.334530] mpt2sas_cm0: handle(0xd) sas_address(0x5000c50056896855) port_type(0x1)
[    2.356831]  end_device-6:0: mpt3sas_transport_port_add: added: handle(0x000a), sas_addr(0x4433221103000000)
[    2.361055]  end_device-6:1: mpt3sas_transport_port_add: added: handle(0x000d), sas_addr(0x5000c50056896855)
[    2.362549] mpt2sas_cm0: detecting: handle(0x000c), sas_address(0x5000c500568970b9), phy(4)
[    2.362550] mpt2sas_cm0: REPORT_LUNS: handle(0x000c), retries(0)
[    2.363041] mpt2sas_cm0: TEST_UNIT_READY: handle(0x000c), lun(0)
[    2.363280] mpt2sas_cm0: handle(0xc) sas_address(0x5000c500568970b9) port_type(0x1)
[    2.436182]  end_device-6:2: mpt3sas_transport_port_add: added: handle(0x000b), sas_addr(0x5000c50056891ce1)
[    2.439603]  end_device-6:3: mpt3sas_transport_port_add: added: handle(0x000c), sas_addr(0x5000c500568970b9)
[    2.521728]  end_device-6:4: mpt3sas_transport_port_add: added: handle(0x0009), sas_addr(0x5000c50056896795)
wjz304 commented 1 year ago

https://github.com/wjz304/arpl-modules/releases You can also download the matching older version of the modules there and update via local upload in the update menu. (Module versions do not map one-to-one to ARPL versions.)

aarpro commented 1 year ago

https://github.com/wjz304/arpl-modules/releases You can also download the matching older version of the modules there and update via local upload in the update menu. (Module versions do not map one-to-one to ARPL versions.)

If I use your ARPL 23.9.1 with, for example, addons 23.8.4, will that work correctly? Or do I need ARPL 23.8.9? In my opinion that is the latest version that works correctly...

I understand the problem is the mpt3sas driver, but why did it break? (I am very weak in Linux :( )

wjz304 commented 1 year ago

If I use your ARPL 23.9.1 with, for example, addons 23.8.4, will that work correctly? Or do I need ARPL 23.8.9? In my opinion that is the latest version that works correctly...

Yes, usually it does.

I understand the problem is the mpt3sas driver, but why did it break? (I am very weak in Linux :( )

The driver keeps changing because of recent attempts to solve the HBA issue on DT models.

snailium commented 1 year ago

I compared the mpt3sas driver logs.

23.6.7

[    2.356831]  end_device-6:0: mpt3sas_transport_port_add: added: handle(0x000a), sas_addr(0x4433221103000000)
[    2.361055]  end_device-6:1: mpt3sas_transport_port_add: added: handle(0x000d), sas_addr(0x5000c50056896855)
[    2.436182]  end_device-6:2: mpt3sas_transport_port_add: added: handle(0x000b), sas_addr(0x5000c50056891ce1)
[    2.439603]  end_device-6:3: mpt3sas_transport_port_add: added: handle(0x000c), sas_addr(0x5000c500568970b9)
[    2.521728]  end_device-6:4: mpt3sas_transport_port_add: added: handle(0x0009), sas_addr(0x5000c50056896795)

23.9.0

[ 2.728611] end_device-6:0: mpt3sas_transport_port_add: added: handle(0x000a), sas_addr(0x4433221103000000)
[ 17.793545] end_device-6:1: mpt3sas_transport_port_add: added: handle(0x000d), sas_addr(0x5000c50056896855)
[ 32.896036] end_device-6:2: mpt3sas_transport_port_add: added: handle(0x000b), sas_addr(0x5000c50056891ce1)
[ 47.975743] end_device-6:3: mpt3sas_transport_port_add: added: handle(0x000c), sas_addr(0x5000c500568970b9)
[ 63.072647] end_device-6:4: mpt3sas_transport_port_add: added: handle(0x0009), sas_addr(0x5000c50056896795)

In other words, the old version hands each drive over to the system almost instantly after detection, while the new one takes 15 seconds per drive. So the question is: which component is eating those 15 seconds?
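
The gap is easy to read off the timestamps. A rough one-liner (assuming dmesg's default bracketed timestamps) prints the interval between consecutive port_add events; on the 23.9.0 log it is about 15.1 s each:

$ dmesg | grep mpt3sas_transport_port_add | \
      awk -F'[][]' '{ if (p != "") printf "%.1f s\n", $2 - p; p = $2 }'
15.1 s
15.1 s
15.1 s
15.1 s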

snailium commented 1 year ago

https://github.com/wjz304/arpl-modules/releases You can also download the matching older version of the modules there and update via local upload in the update menu. (Module versions do not map one-to-one to ARPL versions.)

If I use your ARPL 23.9.1 with, for example, addons 23.8.4, will that work correctly? Or do I need ARPL 23.8.9? In my opinion that is the latest version that works correctly...

I understand the problem is the mpt3sas driver, but why did it break? (I am very weak in Linux :( )

My best guess is that the SAS HBA issue is not related to the addons or modules, but to the LKMs.

I'll do some tests later today.

aarpro commented 1 year ago

So... Clean system... install ARPL 23.8.11, and added manually addons 23.8.10 (By the way, I did not find where see version of addons used in ARPL) all SSD/HDD is OK, shows incorrect serial numbers on LSI SAS 9211-4i controller

2023-09-14_223551

I am ready for experiments, if something changes on ARPL 23.9.1(2) or over release

wjz304 commented 1 year ago

I was working on other projects last night, so I'll take a look tonight

snailium commented 1 year ago

I did some experiments.

Starting from 23.6.7.

  1. Only update ARPL to 23.9.1: everything is fine.
  2. Update ARPL to 23.9.1 and update modules to the latest (23.9.0?): I see corrupted HDDs and the 15-second gap between HDD handovers to DSM. Only these modules are loaded: mpt3sas, virtio, virtio_pci, virtio_ring, vmxnet3.
  3. Update ARPL to 23.9.1 and update LKMs to the latest (23.9.1?): everything is fine.
  4. ARPL 23.9.1 + latest LKM + latest addons (23.9.1?) with forced maxdisks: everything is fine.

It seems the problem is in the modules.


EDIT:

I compared the following modules


EDIT2:

The following combination works!

For people experiencing HDD corruption, try manually uploading modules 23.8.5: https://github.com/wjz304/arpl-modules/releases/tag/23.8.5

aarpro commented 1 year ago

The following combination works!

  • Update ARPL 23.9.1 (in ARPL update menu, need reboot)
  • Update LKMs 23.9.1 (in ARPL update menu)
  • Update addons 23.9.1 (in ARPL update menu)
  • Update modules 23.8.5 (manual upload)
  • Build loader

Confirmed.

Advantech ASMB-260I-21A1 / Atom C3558 / DS3622 / DSM 7.2-64570 Update 3, ARPL 23.9.1 / modules 23.8.5

All SSDs/HDDs are correct; no system errors on the last HDDs.

But the HDD/SSD firmware version detection error is still not resolved (https://github.com/wjz304/arpl-i18n/issues/199#issuecomment-1719787707).

A workaround: just hide that column in Storage Manager HDD/SSD; if it isn't visible, there is no problem :) (screenshot: 2023-09-15_182252)

snailium commented 1 year ago

But the HDD/SSD firmware version detection error is still not resolved (#199 (comment)).

I guess you need the "hdddb" addon.

aarpro commented 1 year ago

But the HDD/SSD firmware version detection error is still not resolved (#199 (comment)).

I guess you need the "hdddb" addon.

hdddb is installed. (screenshot: 2023-09-15_185100)

wjz304 commented 1 year ago

https://github.com/wjz304/arpl-modules/releases/tag/23.9.2 plz test it

wjz304 commented 1 year ago

But the HDD/SSD firmware version detection error is still not resolved (#199 (comment)).

I guess you need the "hdddb" addon.

hdddb is installed. (screenshot: 2023-09-15_185100)

The hdddb author pushed an update that broke the previous version; update the addons to the latest version.

aarpro commented 1 year ago

https://github.com/wjz304/arpl-modules/releases/tag/23.9.2 plz test it

Should I test only the modules 23.9.2 update, or ARPL 23.9.2 as well?

aarpro commented 1 year ago

https://github.com/wjz304/arpl-modules/releases/tag/23.9.2 plz test it

ASRock J3355B-ITX / Celeron J3355 / DS918+ / DSM 7.2-64570 Update 3; 2 SSD cache drives on motherboard SATA, 4 HDDs on an LSI SAS 9211-4i controller.
Enabled prerelease --> full update to ARPL 23.9.2 (ARPL 23.9.2, addons 23.9.3, modules 23.9.2, LKM 23.9.1).

YES!!!! It is OK!!!

(screenshot: 2023-09-15_223346)

(screenshot: 2023-09-15_223722)

The only thing is that the storage pool starts optimizing in the background... but that's no problem...

(screenshot: 2023-09-15_223833)

wjz304 commented 1 year ago

OK

aarpro commented 1 year ago

Advantech ASMB-260 / Atom C3558 / DS3622 / DSM 7.2-64570 Update 3; 2 SSD cache drives on motherboard SATA, 4 HDDs on an LSI SAS 9211-4i controller. Enabled prerelease --> full update to ARPL 23.9.2 (ARPL 23.9.2, addons 23.9.3, modules 23.9.2, LKM 23.9.1).

HDDs/SSDs OK; no system errors on the HDDs.

(screenshot: 2023-09-15_225849)

But the HDD/SSD firmware version detection error is still not resolved. I'll try fully removing the hdddb addon now.

upd: removed the hdddb addon and installed it again; the HDD version detection error is still the same.

upd1: installed hdddb addon 23.8.5; still the same.

upd2: returned to addons 23.9.3; still the same.

upd3: returned to addons 23.9.1; still the same :(

snailium commented 1 year ago

Confirmed: modules 23.9.2 fixes the SAS HBA issue! Thank you @wjz304!

Proxmox VM + LSI 9207-8i

My loader configuration now

No manual upload needed.

wjz304 commented 1 year ago

There is one other change in modules 23.9.2; I'll wait for feedback on another issue before publishing the release.

wjz304 commented 1 year ago

Advantech ASMB-260 / Atom C3558 / DS3622 / DSM 7.2-64570 Update 3; 2 SSD cache drives on motherboard SATA, 4 HDDs on an LSI SAS 9211-4i controller. Enabled prerelease --> full update to ARPL 23.9.2 (ARPL 23.9.2, addons 23.9.3, modules 23.9.2, LKM 23.9.1).

HDDs/SSDs OK; no system errors on the HDDs.

(screenshot: 2023-09-15_225849)

But the HDD/SSD firmware version detection error is still not resolved. I'll try fully removing the hdddb addon now.

upd: removed the hdddb addon and installed it again; the HDD version detection error is still the same.

upd1: installed hdddb addon 23.8.5; still the same.

upd2: returned to addons 23.9.3; still the same.

upd3: returned to addons 23.9.1; still the same :(

The hdddb addon was written by @007revad, and I am not very familiar with this part. Do you have a screenshot of the "HDD version detection error"?

aarpro commented 1 year ago

Do you have a screenshot of the "HDD version detection error"?

(screenshot: 2023-09-14_193018)

wjz304 commented 1 year ago

Let me take a look tomorrow. I need to take a break now and go have breakfast

wjz304 commented 1 year ago

It is now 06:00 AM Beijing time ...

snailium commented 1 year ago

It is now 06:00 AM Beijing time ...

Thanks for all your hard work.

PeterSuh-Q3 commented 1 year ago

Let me take a look tomorrow. I need to take a break now and go have breakfast

Please don't overwork yourself; take care of your health. I'm looking forward to good results.

007revad commented 1 year ago

Do you have a screenshot of the "HDD version detection error"?

(screenshot: 2023-09-14_193018)

Can you run the following command via SSH and reply with the output:

sudo -i /usr/bin/hdddb.sh -n