It's an older white paper, sir, but it still checks out: Maximize SATA Value with SAS Controllers.
I also wanted to leave a note to myself—now that I have a PC power supply, I think I might switch to using it to power everything—floppy connector to the power connector on the IO board, SATA power to each of the drives, and if I use the IO Crest PCIe switch, I think it also uses a floppy connector for power... easier to manage than the menagerie of power cables and plugs I'm currently using.
First attempt:
$ dmesg
...
[ 1.007786] brcm-pcie fd500000.pcie: host bridge /scb/pcie@7d500000 ranges:
[ 1.007823] brcm-pcie fd500000.pcie: No bus range found for /scb/pcie@7d500000, using [bus 00-ff]
[ 1.007898] brcm-pcie fd500000.pcie: MEM 0x0600000000..0x0603ffffff -> 0x00f8000000
[ 1.007969] brcm-pcie fd500000.pcie: IB MEM 0x0000000000..0x00ffffffff -> 0x0100000000
[ 1.324889] brcm-pcie fd500000.pcie: link down
Trying a sudo reboot resulted in the same thing. Then trying a sudo shutdown followed by a power start via jumper on the last two pins of J2 also resulted in the same thing. Going to get the PCIe switch out and see if I can get anything different...
Note that on the LSI card, LED CR3 is lit green. Not sure what that means though :P
With the IO Crest switch connected, I see it... but no LSI card. I have two of them, might as well try the other...
$ lspci
00:00.0 PCI bridge: Broadcom Limited Device 2711 (rev 20)
01:00.0 PCI bridge: Pericom Semiconductor PI7C9X2G304 EL/SL PCIe2 3-Port/4-Lane Packet Switch (rev 05)
02:01.0 PCI bridge: Pericom Semiconductor PI7C9X2G304 EL/SL PCIe2 3-Port/4-Lane Packet Switch (rev 05)
02:02.0 PCI bridge: Pericom Semiconductor PI7C9X2G304 EL/SL PCIe2 3-Port/4-Lane Packet Switch (rev 05)
The other card does this fun rhythmic pulsing thing when I use my external power supply for the PCIe switch, and the LED on the 2A was also showing that weak "ow, my power is hurting" look. I'm gonna need a better power supply I think. I have one... but it's annoyingly bulky :P
Weird that the two identical cards behave differently here. But seems like it could be power.
Power-wise, the card's looking a lot healthier. But alas, still nothing showing up on the Pi yet (link is down). Will keep trying...
Also just leaving a note here: I bought a Redragon RGPS 600W Full Modular power supply. And then realized unless you buy a 'bench' supply, the thing waits for a signal from the motherboard to switch on.
I had to look up a diagram of the motherboard connector (24-pin version), and discovered that you can put a jumper between PS_ON and any COM (ground) wire to simulate power-on.
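For reference, these are the relevant pins on a standard 24-pin ATX connector (worth double-checking against your PSU's own documentation):

Pin 16 (green wire)            PS_ON#  - pull to ground to switch the supply on
Pin 15, 17, or any black wire  COM     - ground; the other side of the jumper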
The jumper has to remain in place the whole time you want the supply powered up, so in my case, I used breadboard jumper wire, and kind of kinked the ends I stripped off so they'd have enough friction to stay in on their own.
It might be nice for me to find a power supply connector and build a little plug that just jumps the two wires on the PSU itself, so I don't have to have a giant ball of cables coming out.
The good news? Now I'm not lacking for any power supply connectors, though having to use 4-pin molex to Floppy connectors on some of the devices is highly annoying.
I ordered a couple other PCIe risers with external PSU connections, and one of them even has three options (molex, floppy, or 6-pin PCI power connector). Maybe between the four of these things, one of them will get the LSI board to power up all the way.
Fingers crossed, but it'll be some time before it arrives!
YouTube video is up: Enterprise SAS RAID ...on a RASPBERRY PI?!
Still, I have a tiny bit of hope for these cards... it could be that both are dead and I just need to grab a newer one :/
Have you tested these SAS cards in a traditional PC? I'd probably start there to rule out bad hardware given that server pulls have usually had a pretty hard life and RAID cards are high failure items. I am running an LSI SAS2008 PCI-Express Fusion-MPT SAS-2 in my home NAS, and I had to cover one of the pins with Kapton tape before it would function in my desktop motherboard. More details here: https://forums.unraid.net/topic/27724-solved-perc-h310-causing-system-not-to-boot/?tab=comments#comment-258522
Haha, nice catch on the pin. And yeah, it would probably be a good idea to drop the cards in a PC to verify they actually work. I just don't have one at my house right now. Might be a good excuse to get one of those 'old fashioned' PC cases :D
Any chance covering the SMBus clock and data pins might help?
(oops I see lachesis already mentioned this)
Also, it's very worthwhile to look through the dmesg output to see if the kernel sees them at all and is failing to initialize the hardware. And sometimes kernel boot args are needed with these kinds of things; you could try iommu=soft.
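For example, something like this (assuming Raspberry Pi OS, where the kernel args live on a single line in /boot/cmdline.txt):

$ dmesg | grep -iE 'pcie|mpt|sas'                    # look for PCIe/SAS messages from the kernel
$ sudo cp /boot/cmdline.txt /boot/cmdline.txt.bak    # keep a backup of the single-line file
$ sudo sed -i 's/$/ iommu=soft/' /boot/cmdline.txt   # append the boot arg
$ sudo reboot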
First, I must admit I'm guessing here; sadly, I don't have that hardware on hand to do any testing.
The green LED on the first card in the YT video could just be the normal heartbeat LED that's present on non-server cards, too.
Have you tried a PCI rescan? Maybe the LSI just takes a little bit longer to initialize.
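For anyone who wants to try that, a rescan is just a sysfs write, no extra tooling needed:

$ echo 1 | sudo tee /sys/bus/pci/rescan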
Pi OS lacks a lot of drivers. You should try Fedora. You can do a normal install with RPi4 EFI boot, or just use U-Boot. Try both.
Just noting here since I left some info in the YT comments but forgot to cross-post: I check dmesg quite frequently (pretty much a hundred times a day as I'm testing all these devices)—with the LSI cards it always just shows link down during the short initial PCIe steps near the top of the logs.

Is it possible that there is not enough BAR space to address the SAS controller? Or does the firmware on the SAS controller think that if all 4 PCIe lanes are not available it should just refuse to work? You could buy a SAS card with 1 SAS connector instead, since that would require less bandwidth; that could work. And have you tried updating to the latest Linux kernel?
To the extent of the previous comment, as soon as I get my CM4 IO board (and CM4, of course) I plan to test with an Adaptec 6405-E card. It has a single SAS 2 connector, uses an x1 PCIe link natively, and uses the aacraid driver already in the kernel tree. I already got the card, just waiting for Pimoroni to ship me the CM4 goodies.
If it were a BAR issue, it would've shown up in the dmesg logs—but the Pi is basically acting like it sees no card at all (thus the 'link down' message). Could be the card is trying to start up and needs BIOS or something along those lines.
A quick Google search on the IBM ServeRAID family of drive controllers indicates that they have some issues that occur even when they are installed in x86 desktop computers.
PAL is a set of code in typical BIOSes for accessing hardware directly. It's rarely used except when doing direct access to a device's hardware (for firmware updates most commonly).
99.9% of UEFIs out there have the PAL removed (along with a crapload of other units from BIOSes) to make room for the UEFI shell. An alternative to PAL is to use the UEFI shell and an application that you run in the UEFI shell to ensure direct connection to the hardware without anything going wrong. ~ copypasta from a forum that is no longer online
The 'link down' message is printed after the driver waits 100ms for the link to come up; that's controlled by a wait loop in drivers/pci/controller/pcie-brcmstb.c. You could try just increasing that time, to like a second.
@elFarto ah, good idea. I'll give that a try later.
Even with that, especially after hearing some of the BIOS / early-gen SAS card woes here, in YouTube comments, and on some other forums I've been searching, I'm not super hopeful for these LSI cards. We'll see, though! Right now I'm trying to do some more work debugging what's going on with a PCI switch, as it causes some cards to behave differently.
Could be the card is trying to start up and needs BIOS or something along those lines.
I have been doing a little light research into option ROMs, and it certainly sounds like even if the card was initialised through an option ROM, it should still be possible to do the initialisation from a driver after the kernel has booted. The option ROM initialisation is only needed for things that require the card to be initialised during POST (e.g. booting from a RAID volume on the card, network boot from a network card, or displaying POST messages on a graphics card, which is why some ARM servers apparently have issues with external GPUs that do not work until the kernel has loaded the driver).
I have some HP P400 Smart Array cards, but I don't have a CM4 to test them with or I'd give it a go, although I'd expect similar results, especially since the storage arrays are actually configured from the option ROM.
It would definitely be useful to test the cards in a regular desktop; that way you can rule out hardware or firmware issues with the cards, and you could also try them in single-lane PCIe slots to see if they refuse to initialise without a large enough number of lanes. Even an old office desktop from eBay should suffice for testing...
According to documentation I found, CR3 and CR4 represent a heartbeat signal and are supposed to flash to indicate normal operation. I have not seen anywhere that describes what it means to be on solid.
@kohenkatz Maybe solid on means "I enjoy interfacing with this Raspberry Pi, but I will not work with it."
With a lot of these SAS cards, you absolutely need to reflash the firmware for them to work outside of the server they came from, and for that, you need to stick them in an x86 box.
I'd be happy to do that for you (and make a quick video of the process if you like), but posting them to Belgium and back would likely get pricey. Your other option is to do some research for instructions on changing the card into "IT mode" and then (if you want hardware RAID) back into "IR mode".
Also, the BR10i is an older card, so it may not be worth using even if you do get it working.
Note: I am in contact with a few engineers who, let's just say, know a lot more than I do about LSI cards. We're going to do a little more debugging, and I may also be able to acquire a few other cards of similar vintage which might have a better chance of working.
Also dropping just for the general knowledge: IBM ServeRAID M1015 Part 4: Cross flashing to a LSI9211-8i in IT or IR mode; and a similar article on this specific card, IBM ServeRAID BR10i LSI SAS3082E-R PCI-Express SAS RAID Controller.
From the guide:
sas2flsh -o -f 2118it.bin -b mptsas2.rom (sas2flsh -o -f 2118it.bin if OptionROM is not needed)
I think you could flash it without the OptionROM; I use that approach in servers where I do not need to boot from the card, to speed up the boot process.
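For reference, the full crossflash sequence in those guides looks roughly like this; I'm quoting the flags from memory, so verify against the guide and your firmware package before running anything (a bad flash can brick the card):

$ sas2flsh -o -e 6                          # erase the existing flash
$ sas2flsh -o -f 2118it.bin -b mptsas2.rom  # flash IT firmware plus the boot ROM
$ sas2flsh -o -sasadd 500605bxxxxxxxxx      # restore the SAS address from the card's sticker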
I've managed to get an LSI 9211-8i recognised:
pi@raspberrypi:~ $ sudo lspci -v
00:00.0 PCI bridge: Broadcom Limited Device 2711 (rev 20) (prog-if 00 [Normal decode])
        Flags: bus master, fast devsel, latency 0, IRQ 55
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
        I/O behind bridge: 00000000-00000fff
        Memory behind bridge: f8000000-f80fffff
        Capabilities: [48] Power Management version 3
        Capabilities: [ac] Express Root Port (Slot-), MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [180] Vendor Specific Information: ID=0000 Rev=0 Len=028 <?>
        Capabilities: [240] L1 PM Substates
        Kernel driver in use: pcieport

01:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)
        Subsystem: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
        Flags: fast devsel
        I/O ports at
I then compiled in what I think are the right drivers and got a bit further, I haven't really got a clue what I'm doing though:
[ 58.771619] pci 0000:00:00.0: enabling device (0000 -> 0002)
[ 58.771653] mpt3sas 0000:01:00.0: enabling device (0000 -> 0002)
[ 58.771713] mpt2sas_cm0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (934084 kB)
[ 58.828972] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[ 58.828981] mpt2sas_cm0: High IOPs queues : disabled
[ 58.829042] mpt2sas0: IO-APIC enabled: IRQ 40
[ 58.829052] mpt2sas_cm0: iomem(0x00000006000c0000), mapped(0x00000000efa7e386), size(16384)
[ 58.829058] mpt2sas_cm0: ioport(0x0000000000000000), size(0)
[ 58.883805] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
[ 58.933309] mpt2sas_cm0: Allocated physical memory: size(1687 kB)
[ 58.933324] mpt2sas_cm0: Current Controller Queue Depth(3364),Max Controller Queue Depth(3432)
[ 58.933334] mpt2sas_cm0: Scatter Gather Elements per IO(16)
[ 74.211968] mpt2sas_cm0: Command Timeout
[ 74.211981] mf:
[ 74.211987] 04000000
[ 74.211996] 00000000
[ 74.212005] 00000000
[ 74.212012] 00000000
[ 74.212020] 00000000
[ 74.212028] 09000000
[ 74.212035] 00000000
[ 74.212043] d3000000
[ 74.212058] ffffffff
[ 74.212066] ffffffff
[ 74.212073] 00000000
[ 74.212102] mpt2sas_cm0: sending diag reset !!
[ 75.170970] mpt2sas_cm0: diag reset: SUCCESS
[ 75.228715] mpt2sas_cm0: CurrentHostPageSize is 0: Setting default host page size to 4k
And the above error repeats.
Also, I tried an LSI 9260 Hardware RAID card and this did appear to work once I had added the Megaraid driver. I've put this to bed though as even if I get it working, from what I can see there are no arm binaries for megacli so we wouldn't be able to configure it anyway:
[ 109.305695] megaraid_sas 0000:01:00.0: FW now in Ready state
[ 109.305717] megaraid_sas 0000:01:00.0: 63 bit DMA mask and 32 bit consistent mask
[ 109.308146] megaraid_sas 0000:01:00.0: requested/available msix 1/-28
[ 109.308165] megaraid_sas 0000:01:00.0: current msix/online cpus : (0/4)
[ 109.308177] megaraid_sas 0000:01:00.0: RDPQ mode : (disabled)
[ 109.373678] megaraid_sas 0000:01:00.0: controller type : MR(512MB)
[ 109.373695] megaraid_sas 0000:01:00.0: Online Controller Reset(OCR) : Enabled
[ 109.373706] megaraid_sas 0000:01:00.0: Secure JBOD support : No
[ 109.373717] megaraid_sas 0000:01:00.0: NVMe passthru support : No
[ 109.373728] megaraid_sas 0000:01:00.0: FW provided TM TaskAbort/Reset timeout : 0 secs/0 secs
[ 109.373739] megaraid_sas 0000:01:00.0: JBOD sequence map support : No
[ 109.373749] megaraid_sas 0000:01:00.0: PCI Lane Margining support : No
[ 109.373764] megaraid_sas 0000:01:00.0: megasas_init_mfi: fw_support_ieee=0
[ 109.373860] megaraid_sas 0000:01:00.0: INIT adapter done
[ 109.373874] megaraid_sas 0000:01:00.0: JBOD sequence map is disabled megasas_setup_jbod_map 5665
[ 109.429947] megaraid_sas 0000:01:00.0: pci id : (0x1000)/(0x0079)/(0x1000)/(0x9268)
[ 109.429956] megaraid_sas 0000:01:00.0: unevenspan support : no
[ 109.429963] megaraid_sas 0000:01:00.0: firmware crash dump : no
[ 109.429969] megaraid_sas 0000:01:00.0: JBOD sequence map : disabled
[ 109.429978] scsi host0: Avago SAS based MegaRAID driver
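For anyone wanting to reproduce this, I believe these are the Kconfig symbols involved (names from mainline; treat this as a best guess rather than a verified list):

CONFIG_SCSI_MPT3SAS=m    # SAS2008/SAS2308 HBAs (the old mpt2sas driver is merged into mpt3sas)
CONFIG_SCSI_SAS_ATTRS=m  # SAS transport attributes that mpt3sas depends on
CONFIG_MEGARAID_SAS=m    # MegaRAID cards like the 9260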
@PBXForums - Nice!
I haven't really got a clue what I'm doing though
Hehe, welcome to my world—fake it 'till you make it!
There are no arm binaries for megacli so we wouldn't be able to configure it anyway
Slightly unrelated, but I'm actually looking into this separately... I have a feeling it might be possible to find it someday soon at least for arm64...
Hello,
Maybe completely useless info for you, but a while back we used qemu-user-static to wrap a nasty CUPS driver (for a barcode printer, if I remember correctly), and it worked pretty well. Though I imagine the situation with megacli is probably a lot tougher, because it likely wants to open the relevant /dev control device, and there might be ugly alignment issues and whatnot. (But at least in theory, it could work!)
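A rough sketch of what that would look like on a Debian-based arm64 system; the MegaCli64 path below is the usual x86 RPM install location, and whether the /dev ioctls survive emulation is exactly the open question:

$ sudo apt install qemu-user-static
$ sudo qemu-x86_64-static /opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aALL   # run the x86_64 binary under user-mode emulation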
There is an open-source flashing tool available at https://github.com/marcan/lsirec; this should be enough to get the card into IT mode, in which case it doesn't need any userspace tools to control it. In this mode, it would be perfectly fine as an HBA.
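I haven't run it myself, but from that repo's README the flow is roughly the following (subcommands quoted from memory; verify against the README first):

$ sudo ./lsirec 0000:01:00.0 unbind               # detach the kernel driver (PCI address is an example)
$ sudo ./lsirec 0000:01:00.0 halt                 # stop the controller firmware
$ sudo ./lsirec 0000:01:00.0 hostboot 2118it.bin  # boot it from a host-supplied firmware image
$ sudo ./lsirec 0000:01:00.0 rescan               # hand the device back to the kernel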
To take advantage of the RAID features, it may be possible to configure an array on a card using an x86 machine and then just drop it onto the Pi; the array configuration is stored on the card. This does have the notable disadvantage that the array can't be reconfigured online, at least until megarec is reverse engineered and rebuilt for Linux. It's also worth trying the qemu approach; IIRC, megarec just maps some configuration space on the card into its own memory and then sends commands, so I'd imagine it would work OK under qemu-user.
Getting the 9211i into IT mode is not really a problem, as that can be done on another machine; we could also potentially do the same with the LSI hardware RAID controllers. The problem I see with this approach is things like replacing a faulty drive, etc. I'll maybe take a look at this qemu approach, though I haven't a clue where to start yet.
With regards to the LSI 9211 and others, I think the problem is probably with the PCIe BAR allocation. I didn't post it above as I didn't spot it until later, but there was an issue with allocating some BAR0 stuff, with messages relating to io.
From reading one of Jeff's other issues, it would seem that the CM4 does not support this 'io' type allocation, just the 'mem' type, so I'm not that hopeful we will get a fix.
This has now prompted me to order a RockPro64, which I have seen does support the 9211; if I am correct, this would imply support for both 'mem' and 'io' types. Combine this with the fact that it is PCIe x4, and I'm now thinking it may be a better candidate for the job I want; it may also have better PCIe support for more hardware.
I have a SAS2008 card in a rockpro64, running for years...
I am getting a second one off eBay (delivery this week) and I will give it a try. I have UEFI on my CM4, so I want to check OptionROM + IR mode.
Ok, so
root@ubuntu:~# lspci
00:00.0 PCI bridge: Broadcom Inc. and subsidiaries BCM2711 PCIe Bridge (rev 20)
01:00.0 SCSI storage controller: Broadcom / LSI SAS1068E PCI-Express Fusion-MPT SAS (rev 08)
root@ubuntu:~# lsblk -o NAME,TRAN,HCTL|grep sas
sdb sas 1:0:0:0
sdc sas 1:0:1:0
root@ubuntu:~# smartctl -A /dev/sdc
smartctl 7.2 2020-12-30 r5155 [aarch64-linux-5.12.0-rc5-00033-g164004f45154] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 100 100 051 Pre-fail Always - 0
2 Throughput_Performance 0x0026 252 252 000 Old_age Always - 0
3 Spin_Up_Time 0x0023 087 087 025 Pre-fail Always - 3990
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 137
5 Reallocated_Sector_Ct 0x0033 252 252 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 252 252 051 Old_age Always - 0
8 Seek_Time_Performance 0x0024 252 252 015 Old_age Offline - 0
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 34806
10 Spin_Retry_Count 0x0032 252 252 051 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 152
191 G-Sense_Error_Rate 0x0022 100 100 000 Old_age Always - 12
192 Power-Off_Retract_Count 0x0022 252 252 000 Old_age Always - 0
194 Temperature_Celsius 0x0002 064 056 000 Old_age Always - 24 (Min/Max 17/46)
195 Hardware_ECC_Recovered 0x003a 100 100 000 Old_age Always - 0
196 Reallocated_Event_Count 0x0032 252 252 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 252 252 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 252 252 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0036 100 100 000 Old_age Always - 11
200 Multi_Zone_Error_Rate 0x002a 100 100 000 Old_age Always - 12
223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 3
225 Load_Cycle_Count 0x0032 069 069 000 Old_age Always - 321987
That's for a SAS3082E-R card.
I am powering this off an ATX power supply, with a custom PCB + microcontroller to control the compute module IO board and power it off the same 12V rails. I've also applied the following MSI-X patch, I run a custom 5.12.0-rc5 kernel, and I am booting through a custom UEFI (RPi4):
diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
index e330e6811f0b..46592eca345f 100644
--- a/drivers/pci/controller/pcie-brcmstb.c
+++ b/drivers/pci/controller/pcie-brcmstb.c
@@ -469,7 +469,7 @@ static struct irq_chip brcm_msi_irq_chip = {
static struct msi_domain_info brcm_msi_domain_info = {
/* Multi MSI is supported by the controller, but not by this driver */
- .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS),
+ .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | MSI_FLAG_PCI_MSIX),
.chip = &brcm_msi_irq_chip,
};
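In case it helps anyone reproduce this: applying a one-hunk diff like that is just the usual kernel build dance (a sketch assuming the Raspberry Pi kernel tree, where bcm2711_defconfig is the arm64 config target; the patch file name is arbitrary):

$ cd linux
$ git apply brcmstb-msix.patch   # the diff above, saved to a file
$ make bcm2711_defconfig
$ make -j$(nproc) Image modules dtbs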
The LSI2008 is working nicely on the RockPro64, but I have not been able to get it to power up on the RPi. I got it recognized in UEFI and could boot GRUB off the card in IR mode, but I haven't been able to get the card to work in Linux :-( Although I remember trying multiple risers until I got it to work on the RockPro64...
I've got the same card (SAS3082E-R L3-25116-01H // 500605B 0-01BE-A6E0) and it seems to be working.
Here is my kernel config.txt
My kernel is a bit more of a mess, as I've franken-merged the RPi dts with 5.12 upstream and a squashed V3D-enabling patch... But I could share that too if interested. I would guess the only relevant piece is the pcie-brcmstb patch (if it is needed at all). I am running with the latest upstream devicetree with 1GB PCIe space. The PCIe patch removes error messages related to IRQ allocation (check dmesg).
Let me know if you need anything else to reproduce this.
I've also been trying to get a SAS2008 card working, but I have the exact same issue as @PBXForums: the card is detected just fine, but it constantly gives that config_request error message. No drives are shown with lsscsi.
This is a card flashed with IT firmware that works just fine in an x86 PC. Here is the lspci info I get from the same card on a PC and on the CM4. It's interesting that they get detected as slightly different cards (probably due to different versions of mpt3sas). I've tried using the card both with pins B5 and B6 (the SMBus pins mentioned earlier) exposed and with tape over them, but no difference (on this particular PC motherboard I do need to have them covered with tape for it to work properly).
x86:
04:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
Subsystem: Dell 6Gbps SAS HBA Adapter
Flags: bus master, fast devsel, latency 0, IRQ 62, NUMA node 0
I/O ports at 4000 [size=256]
Memory at ef840000 (64-bit, non-prefetchable) [size=64K]
Memory at ef800000 (64-bit, non-prefetchable) [size=256K]
Expansion ROM at ef700000 [virtual] [disabled] [size=1M]
Capabilities: [50] Power Management version 3
Capabilities: [68] Express Endpoint, MSI 00
Capabilities: [d0] Vital Product Data
Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [138] Power Budgeting <?>
Kernel driver in use: mpt3sas
Kernel modules: mpt3sas
CM4:
01:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
Subsystem: Dell 6Gbps SAS HBA Adapter
Flags: bus master, fast devsel, latency 0, IRQ 65
I/O ports at <unassigned> [disabled]
Memory at 600140000 (64-bit, non-prefetchable) [size=64K]
Memory at 600100000 (64-bit, non-prefetchable) [size=256K]
[virtual] Expansion ROM at 600000000 [disabled] [size=1M]
Capabilities: [50] Power Management version 3
Capabilities: [68] Express Endpoint, MSI 00
Capabilities: [d0] Vital Product Data
Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [138] Power Budgeting <?>
Kernel driver in use: mpt3sas
Kernel modules: mpt3sas
Something interesting about the LSI 2308: on x86, setting pci=realloc as needed for Thunderbolt seems to make it fall off the PCIe bus between when I can run lspci in GRUB and when I can run lspci in Linux. I wonder if it doesn't like something about BARs, and that's what's making it unhappy on the Pi CM4?
I'm able to load the mpt2sas/mpt3sas driver for my 2308 just fine on my new Honeycomb LX2K (aarch64), but I don't have a Pi CM4 to try the 2308 in. I also can't test my 2008 anymore, because it seems to have died (darn 12v fuse).
Something to note about the difference between the 2008 and the 2308: the 2308 adds a whole bunch more interrupt queues (though I forgot whether they're MSI or MSI-X).
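If it helps, lspci will show which vector types a card exposes and whether they're enabled, e.g.:

$ sudo lspci -vv -s 01:00.0 | grep 'MSI'   # matches both the MSI: and MSI-X: capability lines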
Is this SAS3082E-R card supposed to work in PCIe 4.0 x1 slots? It doesn't seem to be getting detected (Linux 5.15).
Is this SAS3082E-R card supposed to work in PCIe 4.0 x1 slots? It doesn't seem to be getting detected (Linux 5.15).
Check this; mine works over PCIe 4, but I needed to update the firmware first: https://github.com/tadghh/SystemX3500M2/blob/main/README.md
I received 2 IBM ServeRAID BR10i / LSI SAS3082E-R SAS RAID controllers from Jacob Hiltz, along with two Supermicro Internal MiniSAS to 4 SATA breakout cables, and I just purchased four refurbished WD 500GB 7200RPM SATA II 3Gb/s 3.5" Hard Drives to see if I can get them working in RAID 0 and RAID 1E (I have a few other SATA drives, but they are different sizes, speeds, etc. and I'd rather take the variable of 'drives that are not the same' out of the equation).
Note that I've never set up hardware RAID before, so it'll be a little bit of a learning experience—assuming the card actually works!