Closed geerlingguy closed 2 years ago
Oh man, I missed out on this one hard. I have a 16 drive cluster dual intel server that I would be happy to test this on. (Meaning use the drives) I have spare drives, and the modules to test! This is exactly what I am trying to build. :P
This is really the ultimate rpi NAS board! Can't wait to see that review :)
Let's see if it's time to replace my old Intel i7 w/ 4 sata disk @ Mergerfs/Snapraid setup.
The Radxa Taco board looks like an awesome NAS solution. I'm really happy they are designing it! In the last few days I was considering the Argon EON, but I didn't like their solution for transferring the data over USB 3.0. The Radxa Taco confirmed my doubts. I'll definitely choose the Taco over the EON :) @hipboi do you have plans to substitute the USB 2.0 plug with a USB 3.0/3.1/3.2 plug? Hope it comes out soon.
This is VERY exciting!
What would a case for a setup with SSDs ideally look like? Assuming that hot-swap should be integrated and the SSDs don't give off too much heat.
Greetings from the north of Germany, too!
I looked at the previous issue #202, where @andyattebery mentioned one issue:
I haven't tested the 2.5GbE controller on Raspberry Pi OS, but it looks like the kernel driver/module isn't loaded, and I don't see the network interface.
I have played with the RTL8125B for some time (on x86, of course) and met the same problem. I found a solution on Ask Ubuntu which suggests kernels newer than 5.9 support the NIC out of the box. Maybe more needs to be done for the Raspberry Pi. Another approach is to try Realtek's driver.
Also, I remember Jeff once tested a 2.5GbE NIC in the Pi vs ASUSTOR video. I don't know whether this card uses RTL8125 or RTL8125B. From my understanding, the former has better compatibility than the latter.
Since I have one on hand, let's get the specifics:
The RTL8125B is the same chip used in other 2.5G network cards I've tested (notably the Rosewill: https://github.com/geerlingguy/raspberry-pi-pcie-devices/issues/40).
To get it working, you'll have to cross-compile the Pi kernel with the following option enabled in `menuconfig`:
Device Drivers
> Network device support
> Ethernet driver support
> Realtek devices
> Realtek 8169/8168/8101/8125 ethernet support
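The menuconfig path above corresponds to the `CONFIG_R8169` kernel option, so a rough non-interactive sketch of the same thing (tree location and cross toolchain here are assumptions, not from the comment above) would be:

```shell
# Sketch: enable the Realtek 8169/8168/8101/8125 driver as a module while
# cross-compiling the 64-bit Pi kernel. Assumes the raspberrypi/linux tree
# is checked out and an aarch64 cross toolchain is installed.
cd linux
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- bcm2711_defconfig
./scripts/config --module CONFIG_R8169    # same option as the menuconfig path above
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)" Image modules dtbs
```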
Raspberry Pi OS is not likely to add support for various NICs out of the box, since it's built to work only with the hardware Raspberry Pi currently builds in, and very minimal other hardware (I got them to add SATA support, but that's pretty universal/generic—these NIC drivers are not).
I was able to get the Realtek driver working without having to recompile the kernel with the module on the newest version of Raspberry Pi OS. Jeff mentions in his post about 2.5GbE he had trouble compiling it, so maybe something has changed.
sudo apt install raspberrypi-kernel-headers
tar -xf r8125-9.006.04.tar.bz2
cd r8125-9.006.04; sudo ./autorun.sh
@andyattebery - Good to know! Did you use 32-bit Pi OS or 64-bit? A lot has changed in the past year, it might install easily on both now.
Both 32-bit and 64-bit work. On one 32-bit installation `lspci` wasn't returning anything and the SATA drives weren't showing up. However, I just did a fresh install, updated the packages, installed the driver, and everything is working fine.
@mjeshurun @bydorfler we have the 4x PCIe switch for the CM4, which are used as following:
- 1x for 2.5 GbE
- 1x for 5x SATA
- 1x for NVMe
- 1x for WiFi 6
Which one do you think is least important and could be replaced? The idea was to build a NAS with NVMe as cache and fast WiFi streaming.
Hi @hipboi, I think all the devices on the PCIe switch you mentioned are important. @bydorfler's suggestion might be a better alternative: replace 1x SATA port with a PCIe connector. 4x SATA ports are plenty to connect enough TB using SSDs/HDDs, so an extra PCIe connector would be much more valuable.
@mjeshurun - With a 4-way PCIe switch, you only get 4 'devices' (the ones listed above).
Unfortunately, you can't take one device (the 5x SATA controller) and hot-wire a PCIe port on top of it—you'd need another PCIe switch (or switch to a PCIe switch chip that handles more like an 8x or 12x, and those cost more money and I would assume are larger in board space).
@geerlingguy are you taking suggestions for tests?
I'd be interested to know if all that PCIe switching limits throughput in the (probably very unlikely) scenario that all interfaces are exhausted simultaneously.
Testing the bandwidth of the block devices on their own and all at the same time would be easy to do with some fio, the 2.5 GbE NIC could be loaded using iperf. Testing the full speed of a Wifi 6 card might be a thing that is more difficult to achieve...
I don't have experience with current Raspberry Pi's and SATA or NVME disks, so I don't have any idea if the PCIe switching or the CPU itself would be the bottleneck.
Looking forward to your review and availability of this board!
@markwort - Don't worry, been doing a lot of tests with both PCIe switches, just NVMe, NVMe RAID, SATA RAID, 10G/dual 2.5G Ethernet, multiple 1G Ethernet, etc. — the bottleneck is always in the CPU + x1 lane, unfortunately.
Maximum we'll get is 3.6 Gbps or so through the bus, if going to the CPU. What would be interesting to see is if some of the devices (network drivers are like this sometimes) can bump things up for traffic that doesn't have to route through the CPU itself.
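For reference, that ~3.6 Gbps ceiling falls straight out of the link math: the CM4 exposes a single PCIe Gen 2 lane at 5 GT/s with 8b/10b encoding, and packet/protocol overhead eats the rest. A quick sanity check (the calculation is mine, not from the comment above):

```shell
# PCIe Gen 2 x1: 5 GT/s with 8b/10b encoding leaves 4 Gbit/s of payload
# bandwidth before TLP/protocol overhead, which brings real-world
# throughput down to roughly the observed 3.6 Gbit/s.
awk 'BEGIN {
  raw_gbps = 5 * 8 / 10
  printf "raw payload: %.1f Gbit/s (%.0f MB/s)\n", raw_gbps, raw_gbps * 1000 / 8
}'
# -> raw payload: 4.0 Gbit/s (500 MB/s)
```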
But I'm planning on putting the pedal to the metal.
The Penta board I have only has M-key, so I'll have to decide whether to put in an NVMe SSD, or an A+E adapter to mount a WiFi 6 card on it.
Thanks for the explanation 🙏
@geerlingguy thanks for the quick answer! I didn't think that you might have dealt with the same PCIe switches before, but now I remember a remark about these issues from one of your videos.
network drivers are like this sometimes
You're probably thinking of RDMA, which can be supported by something like NFS or SMB ("SMB-Direct"), but that usually requires special NICs, and I see you've stumbled across issues with that in the past already.
Have you tried io_uring for any benchmarks? In your Pi Dramble repo I only found calls to fio using libaio. io_uring apparently supports zero-copy where data is moved directly from user space to the device, without having to copy it first (in memory) into kernel space.
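For what it's worth, fio can switch I/O engines with a single flag, so an io_uring run is easy to try (`--ioengine=io_uring` needs kernel 5.1+ and a reasonably recent fio). The file path and sizes below are placeholders, not taken from the Pi Dramble scripts:

```shell
# Hypothetical io_uring variant of a 4K random read test; the filename,
# size, and runtime are illustrative placeholders.
fio --name=iouring-4k-randread --ioengine=io_uring --direct=1 \
    --rw=randread --bs=4k --size=512M --runtime=30 --time_based \
    --filename=/mnt/nvme/fio-iouring-test
```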
I wouldn't know what to use Wifi on a NAS like that for. To get adequate speeds it can't be any more than a short cable run away from the access point. Maybe someone would want to turn their Pi into a wireless access point with disks built in :shrug:
@markwort - I imagine one use case would be if you only have WiFi available in the spot where you want your NAS—with WiFi 6 and a really good router, and not too much distance (20' max probably), you can achieve gigabit speeds, which is good enough for many.
But I go wired 99% of the time when I care about access speed/performance. Even the best wireless devices are going to run into issues where wired performance will be more stable.
I haven't done anything with io_uring yet.
@geerlingguy Doesn't the CM4 come with an option to order a built-in WiFi/Bluetooth chip? Why not rely on that WiFi and thus free up one PCIe switch port for other tasks?
@mjeshurun - Sure, but it's limited to a maximum of around 70 Mbps (90 if you're lucky and have a really, really clear signal). That's good enough for some use cases :)
First boot—used this 12V 8A power supply with 2.5mm barrel plug, booted into Pi OS 64-bit lite, on a 4 GB CM4 Lite with a Sandisk Extreme 32GB microSD card.
A few initial observations:
`lspci` with nothing plugged into anything and the system not updated:

After `sudo apt dist-upgrade` and a reboot:
So after upgrading to the latest 64-bit OS release, SATA seems to get the AHCI driver, and the drive slots should hopefully work. I noticed the board gets a bit toasty. I'll have to measure temps and see if a fan is required for both the topside and underside (or just generally good ventilation—most likely 'yes').
Don't have time to test out NVMe, SATA drives, or the 2.5G Ethernet yet. Looks like at least for the latter, I'll need to install the driver (or recompile the kernel, heh... hopefully the driver just installs gracefully).
Getting Realtek 2.5G NIC working, attempt number 1:
Downloaded version 9.006.04 of the r8125 driver (had to solve an annoying math captcha first). Install the kernel headers:

sudo apt-get install -y raspberrypi-kernel-headers
Run:
tar vjxf r8125-9.006.04.tar.bz2
cd r8125-9.006.04/
sudo ./autorun.sh
Check that module is loaded:
$ lsmod | grep r8125
r8125 167936 0
Verify the interface comes up in `dmesg`:
[ 6274.243955] r8125: loading out-of-tree module taints kernel.
[ 6274.245781] r8125 2.5Gigabit Ethernet driver 9.006.04-NAPI loaded
[ 6274.245884] pci 0000:02:02.0: enabling device (0000 -> 0002)
[ 6274.245904] r8125 0000:04:00.0: enabling device (0000 -> 0002)
[ 6274.265227] r8125 0000:04:00.0 (unnamed net_device) (uninitialized): Invalid ether addr 00:00:00:00:00:00
[ 6274.265241] r8125 0000:04:00.0 (unnamed net_device) (uninitialized): Random ether addr 9a:31:e3:9b:b9:42
[ 6274.265804] r8125: This product is covered by one or more of the following patents: US6,570,884, US6,115,776, and US6,327,625.
[ 6274.267862] r8125 Copyright (C) 2021 Realtek NIC software team <nicfae@realtek.com>
This program comes with ABSOLUTELY NO WARRANTY; for details, please see <http://www.gnu.org/licenses/>.
This is free software, and you are welcome to redistribute it under certain conditions; see <http://www.gnu.org/licenses/>.
[ 6274.344026] eth1: 0xffffffc012680000, 9a:31:e3:9b:b9:42, IRQ 65
[ 6427.668723] r8125: eth1: link up
[ 6427.668784] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Looks like the link is up! Next comment: speed test. Will have to re-plumb my wires a bit to make sure I'm testing over 2.5G and not a 1G switch. I should really get another 10G switch set up in my office so I don't have to go back to my rack for this stuff...
Note: Upgrades may break this installation method (requiring it to be reinstalled every time), so it might be better to use `dkms`. Alternatively, maybe we could campaign to get the r8125 driver merged into the Pi kernel :)
Yeah, I got tired of not being able to reach one of my servers (an Odroid H2+ with the same chip) after a kernel update, so I decided to do the `dkms` automatic kernel module compile thing as described here: https://askubuntu.com/questions/1263363/2-5g-ethernet-linux-driver-r8125-installation-guide/1336708#1336708
Hopefully, the drivers will be included in the kernel soon.
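For anyone else going the dkms route, the rough shape (following that Ask Ubuntu answer) is to put the driver source under `/usr/src` and register it; the version number matches the tarball above, but note that the Realtek tarball doesn't ship a `dkms.conf`, so one has to be written by hand as the linked answer describes:

```shell
# Rough dkms sketch for the r8125 driver, so the module gets rebuilt
# automatically on kernel updates. Assumes a dkms.conf has already been
# created in the source tree per the Ask Ubuntu answer linked above.
sudo apt install -y dkms
sudo cp -r r8125-9.006.04/ /usr/src/r8125-9.006.04
sudo dkms add -m r8125 -v 9.006.04
sudo dkms build -m r8125 -v 9.006.04
sudo dkms install -m r8125 -v 9.006.04
```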
Testing on 2.5 Gbps—one thing I noticed right away, and this may be a bug: neither the amber connection light nor the green activity light on the 2.5 Gbps port light up when I connect it to my 10G switch. Connected to my 1G switch, I get a blinking amber light, but no green light.
So maybe some wires are crossed with the port LEDs to the NIC? It might support multi-mode LEDs for 1/2.5G indication and maybe the board schematic is crossing some connections somewhere.
Anyways, a trip to `ip a` shows that the connection is active. And using `ethtool`, it looks like it's negotiating the correct speeds (full details folded below):
$ ethtool eth1
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
2500baseT/Full
Speed: 2500Mb/s
Duplex: Full
Some performance testing:
pi@taco:~ $ iperf3 -c 10.0.100.100
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 2.73 GBytes 2.35 Gbits/sec 0 sender
[ 5] 0.00-10.01 sec 2.73 GBytes 2.34 Gbits/sec receiver
pi@taco:~ $ iperf3 -c 10.0.100.100
Interestingly, I'm hitting the same issue I did a few months back, where throughput from the Taco to the Mac is a full 2.35 Gbps, but the opposite direction is around 300 Mbps and fluctuates a lot.
I've seen this problem before, where Ethernet was slower only in one direction on one device, and it turned out the issue was with a FLYPROFiber SFP-10G-T-30M transceiver. I'm going to check if that's the case here.
Edit: lol, yep, that was the model on that port. Off to order more transceivers!
Now comes the fun part... let's see if a Sabrent Rocket 8TB NVMe SSD works in the M.2 slot...
It fits, even though it's a double-sided card (luckily the M.2 port is nice and tall).
The M.2 slot uses an M2.5 screw (and my board didn't come with one). That seemed a little odd since most of my M.2 devices/slots seem to come with M2-size screws (M2 meaning the measurement of the ISO metric screw thread, not 'screw meant for M.2'). I was lucky I had a pack of said screws from my Pi PoE HATs!
pi@taco:~ $ lspci -v
05:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01) (prog-if 02 [NVM Express])
Subsystem: Phison Electronics Corporation E12 NVMe Controller
Flags: bus master, fast devsel, latency 0, IRQ 65
Memory at 600200000 (64-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: nvme
pi@taco:~ $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
mmcblk0 179:0 0 29.7G 0 disk
├─mmcblk0p1 179:1 0 256M 0 part /boot
└─mmcblk0p2 179:2 0 29.5G 0 part /
nvme0n1 259:0 0 7.3T 0 disk
I formatted and mounted the drive to `/mnt/nvme`.
Running my disk-benchmark.sh:
pi@taco:~ $ wget https://raw.githubusercontent.com/geerlingguy/raspberry-pi-dramble/master/setup/benchmarks/disk-benchmark.sh
pi@taco:~ $ chmod +x disk-benchmark.sh
pi@taco:~ $ nano disk-benchmark.sh
pi@taco:~ $ sudo DEVICE_UNDER_TEST=/dev/nvme0n1p1 DEVICE_MOUNT_PATH=/mnt/nvme ./disk-benchmark.sh
Results:
Benchmark | Result |
---|---|
fio 1M sequential read | 413 MB/s |
iozone 1M random read | 358 MB/s |
iozone 1M random write | 382 MB/s |
iozone 4K random read | 36.83 MB/s |
iozone 4K random write | 84.01 MB/s |
And next... 5x Samsung 870 QVO 8TB SATA SSDs (and no, that's not a typo... lol):
I noticed that all the drive status LEDs are not only glaringly bright blue (aaah!!), they are also all lined up directly under the middle slot... so good luck seeing them. Not sure if the official case will use some very creative light pipes to show drive status, or if you'll just operate the drives without blinkenlights.
I also noticed the preinstalled headers on the board's edges are basically mm away from the drives once installed. It would be nice to have a smidge more clearance—and 3.5" drives will definitely not fit on this board with those headers in place.
I formatted one drive (`sda`) and mounted it to `/mnt/mydrive`.
I ran my disk-benchmark.sh on one drive:
pi@taco:~ $ sudo DEVICE_UNDER_TEST=/dev/sda1 DEVICE_MOUNT_PATH=/mnt/mydrive ./disk-benchmark.sh
Results:
Benchmark | Result |
---|---|
fio 1M sequential read | 412 MB/s |
iozone 1M random read | 328 MB/s |
iozone 1M random write | 362 MB/s |
iozone 4K random read | 29.57 MB/s |
iozone 4K random write | 56.36 MB/s |
So in all respects, slightly slower than NVMe, but that's to be expected. Way more IOPS on the NVMe drive too. I guess that makes some sense, considering that thing cost twice as much!
Testing out various RAID levels by creating a RAID array in Linux with mdadm.
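A sketch of what creating the RAID 5 array with mdadm typically looks like (the device names and choice of ext4 here are assumptions, not copied from the actual commands used):

```shell
# Sketch: create a 5-drive RAID 5 array with mdadm (device names assumed).
sudo mdadm --create /dev/md0 --level=5 --raid-devices=5 \
  /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid5
sudo mount /dev/md0 /mnt/raid5
cat /proc/mdstat    # watch the initial resync progress
```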
RAID 0 Results:
pi@taco:~ $ sudo DEVICE_UNDER_TEST=/dev/md0 DEVICE_MOUNT_PATH=/mnt/raid0 ./disk-benchmark.sh
Benchmark | Result |
---|---|
fio 1M sequential read | 416 MB/s |
iozone 1M random read | 350 MB/s |
iozone 1M random write | 386 MB/s |
iozone 4K random read | 30.28 MB/s |
iozone 4K random write | 57.81 MB/s |
RAID 5 Results:
pi@taco:~ $ sudo DEVICE_UNDER_TEST=/dev/md0 DEVICE_MOUNT_PATH=/mnt/raid5 ./disk-benchmark.sh
Benchmark | Result |
---|---|
fio 1M sequential read | 416 MB/s |
iozone 1M random read | 349 MB/s |
iozone 1M random write | 81 MB/s |
iozone 4K random read | 30.40 MB/s |
iozone 4K random write | 14.36 MB/s |
Note: For the RAID 5 array, the initial `resync` seemed to use a lot of CPU (80-85% on one core). By default, it looks like it was using a single CPU thread to manage the syncing process. So I ran the following command to use 2 threads instead, which seemed to spread the load and help the average speed stabilize around 98-99 MB/sec (it was at 95-97 MB/sec...):

$ echo 2 | sudo tee /sys/block/md0/md/group_thread_cnt
I also tried increasing the stripe cache size (`echo 4096 | sudo tee /sys/block/md0/md/stripe_cache_size`; it was `256`), but that didn't seem to make a difference, at least not with the resync. From everything I've read about `resync` optimization, it looks like the bottleneck here is the Pi's poor little PCIe bus, which can only put through ~400 MB/sec (a little more than that, but still)... which means the four reading drives top out around 95-100 MB/sec each (watching the progress with `atop`).
Still working on the above comment—but have to wait a bit for the resync. So until that's done, I'm going to take a gander at the work @joshuaboud did in this Pi kernel fork to get ZFS compiling on the Pi.
My plan is to try to get ZFS running after I finish testing the RAID 5 setup, so I can test RAIDZ1 speeds, and then do some network file copy tests maybe with both NFS and SMB.
Could it be easier, though? Could it be as easy as installing ZFS per the official Debian instructions? I'm not sure if there's an arm64-compatible build in the repos the Pi hits...
$ apt search zfs-dkms
Sorting... Done
Full Text Search... Done
zfs-dkms/testing,testing 2.0.2-1~bpo10+1 all
OpenZFS filesystem kernel modules for Linux
pi@taco:~ $ apt search zfsutils-linux
Sorting... Done
Full Text Search... Done
zfs-dbg/oldstable 0.7.12-2+deb10u2 arm64
Debugging symbols for OpenZFS userland libraries and tools
zfsutils-linux/testing 2.0.2-1~bpo10+1 arm64
command-line tools to manage OpenZFS filesystems
zfsutils-linux-dbgsym/testing 2.0.2-1~bpo10+1 arm64
debug symbols for zfsutils-linux
Yay, the following works!
$ sudo apt install zfs-dkms zfsutils-linux
...
Setting up zfs-zed (2.0.2-1~bpo10+1) ...
Created symlink /etc/systemd/system/zed.service → /lib/systemd/system/zfs-zed.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-zed.service → /lib/systemd/system/zfs-zed.service.
Processing triggers for systemd (241-7~deb10u8) ...
pi@taco:~ $ sudo modinfo zfs | grep version
version: 2.0.2-1~bpo10+1
srcversion: E7C034666B080B5D345AF7C
vermagic: 5.10.63-v8+ SMP preempt mod_unload modversions aarch64
pi@taco:~ $ dmesg | grep ZFS
[ 5393.504988] ZFS: Loaded module v2.0.2-1~bpo10+1, ZFS pool version 5000, ZFS filesystem version 5
I set up a RAIDZ1 zpool after setting up ZFS, and benchmarked it.
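Creating the pool itself is nearly a one-liner; a sketch assuming whole-disk vdevs and the pool name that shows up in the benchmark paths below (device names are assumptions):

```shell
# Sketch: create a 5-drive RAIDZ1 pool named zfspool (device names assumed).
sudo zpool create zfspool raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool status zfspool
zfs list zfspool
```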
ZFS RAIDZ1 Results:
pi@taco:~ $ sudo ./iozone3_492/src/current/iozone -e -I -a -s 5000M -r 1024k -i 0 -i 2 -f /zfspool/iozone
pi@taco:~ $ sudo ./iozone3_492/src/current/iozone -e -I -a -s 500M -r 4k -i 0 -i 2 -f /zfspool/iozone
(Note: I had to disable the fio test as it's set up for direct device access and ZFS was having none of that. I should probably try to figure out a good `fio` setup for ZFS zpools... Also, I had to bump the file sizes to make sure ZFS caches didn't make the results look insane (with smaller files ZFS was showing 1.2 GB/sec for the large block reads!).)
Benchmark | Result |
---|---|
fio 1M sequential read | N/A |
iozone 1M random read | 299 MB/s |
iozone 1M random write | 289 MB/s |
iozone 4K random read | 50.06 MB/s |
iozone 4K random write | 10.88 MB/s |
Interestingly, ZFS seemed to do some more caching than the other filesystems, resulting in more performance when you could fit everything being copied in RAM. Not sure what kind of magic that entails... and I'm a bit of a newb when it comes to ZFS anyways.
ZFS does seem to perform slower than normal `mdadm` RAID for large file copies, but is actually a bit faster for small random activity. So... nice? I'll check the network share performance next.
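On the open question of a fio setup that works on ZFS: since ZFS datasets (at least on OpenZFS 2.0) reject O_DIRECT, one option is buffered I/O with an end fsync. A sketch, with the directory and sizes as placeholders:

```shell
# Sketch: a fio run that works on ZFS datasets, which reject O_DIRECT.
# Buffered I/O plus end_fsync keeps the numbers closer to honest; the
# directory path and file size here are assumed, not from the tests above.
fio --name=zfs-seq-read --ioengine=psync --rw=read --bs=1M \
    --size=5G --directory=/zfspool --end_fsync=1
```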
Next I'm going to test Samba and NFS performance to/from my Mac. I set up a Samba share following my simple guide.
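For completeness, a minimal Samba share along those lines looks something like this; the share name, path, and user below are placeholders, not taken from the guide:

```shell
# Minimal Samba share sketch; share name, path, and user are placeholders.
sudo apt install -y samba
sudo tee -a /etc/samba/smb.conf >/dev/null <<'EOF'
[shared]
  path = /mnt/raid5/shared
  read only = no
  browseable = yes
EOF
sudo smbpasswd -a pi    # set a Samba password for the pi user
sudo systemctl restart smbd
```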
Benchmark methodology:
# All run on Mac - set size of file to double Raspberry Pi's RAM:
mkfile 8G test.zip && \
rsync -h --stats test.zip /Volumes/shared/test.zip && \
rm -f test.zip && sleep 60 && \
rsync -h --stats /Volumes/shared/test.zip test.zip && \
rm -f /Volumes/shared/test.zip && \
rm -f test.zip
IMPORTANT CAVEAT: Due to some weird networking issues on my Windows PC, I had to run these benchmarks on my Mac. For some reason, I seemed to be getting some inconsistent results in my benchmarks using `rsync`, and there's no easy way to time Finder copies, so I've tried my best, but I'd take these results with a grain of salt. I ran the same tests to/from my 2.5G ASUSTOR NAS and got 163 MB/sec to it, and 197.52 MB/sec from it, which is close to normal... but a little slower than in the Finder. So these results are accurate—to a point.

One thing I noticed a lot on the Pi is a high percentage of IRQ utilization, 20-40% at times. I was monitoring with `irqtop`, `dstat`, and `watch -n0.1 --no-title cat /proc/interrupts`, and observed lots of interrupts on both the NIC and `ahci` (the SATA driver). I can't find any other tools to monitor PCIe throughput on the Pi (most tools seem oriented towards Intel or motherboards with configurable BIOSes).

Potentially (but probably not) related: https://github.com/raspberrypi/linux/issues/4666
Single drive NVMe:
Source/destination | Samba | NFS |
---|---|---|
From Taco to Mac | 129.20 MB/sec | 86.06 MB/sec |
From Mac to Taco | 92.89 MB/sec | 64.36 MB/sec |
In RAID 0:
Source/destination | Samba | NFS |
---|---|---|
From Taco to Mac | 152.07 MB/sec | 98.19 MB/sec |
From Mac to Taco | 89.97 MB/sec | 91.89 MB/sec |
In RAID 5:
Source/destination | Samba | NFS |
---|---|---|
From Taco to Mac | 125.43 MB/sec | 87.23 MB/sec |
From Mac to Taco | 81.44 MB/sec | 55.97 MB/sec |
In RAIDZ1:
Source/destination | Samba | NFS |
---|---|---|
From Taco to Mac | 112.31 MB/sec | 99.33 MB/sec |
From Mac to Taco | 87.23 MB/sec | 22.46 MB/sec |
NOTE: The `rsync` that ships with macOS is absurdly slow (it is a bit older than the latest, too). Finder copies were going like 4x faster than CLI copies with `rsync`. So I made sure to upgrade to a later version with `brew install rsync`.
Any ideas why the rsync performance might be so slow, compared to direct disk access?
Plenty of older CPUs struggle with modern ssh ciphers due to lack of acceleration, so for connecting to my "trash NAS" (sporting a Phenom II X4 965) I use the following option passed to rsync:
-e 'ssh -c chacha20-poly1305@openssh.com'
Here's a bash script from someone else to test transfer speeds of different ssh ciphers, and here is a blog post with some benchmarks (coincidentally on a "Raspberry Pi 3B+" and a "Mac mini 2018 Core i7-8700B").
I don't think you'd need to test this for all RAID configurations - if this cipher overhead is a problem for one, then it is a problem for all :smile: .
@markwort - I'm using `rsync` here not to copy between a remote server and local, but rather from local to a locally-mounted volume. So the transfers are going across the network, and I've also verified (as close as I can) that direct Finder copies are showing the same (or at least similar, ±10%) performance.
@geerlingguy sorry for not seeing the forest for the trees, it must have been too early for me. I thought I had something useful to contribute, and was too busy lining it out and didn't take a closer look at what exactly you're actually measuring!
I also just tested the power button functionality—after you shut down the Pi, assuming it's still plugged in, you can press the power button and it will boot back up.
Note that when powered down, it seems the PCIe circuits may still have power, because the green LED on the board and the blue LEDs for the SATA drives are all lit up.
Also, when soft-powering this way, it seems like the SATA drives didn't come up normally; maybe the PCIe bus link was down :/ — a hard power off/on cycle seemed to clear everything up.
Power consumption (as measured by Kill-A-Watt):

- Idle: 11.2W
- Under `stress-ng --cpu 4`: 18.5W (peaking around 20W)
- Soft powered off: 8.0W (seems like the PCIe devices still remain active; the NIC goes into a low power state, the SATA drives remain powered up)

As a point of comparison, my ASUSTOR Lockerstor 4 is pretty similar on the low end—about 10W idle (with no drives powered up)—but reaches up to 20-40W in operation when doing heavy reads/writes to all four HDDs.
And just to verify, I also plugged one of my older SATA spinny 3.5" hard drives in using a combo SATA power/data extension cable, and it comes up quite nicely:
pi@taco:~ $ sudo hdparm -I /dev/sda
/dev/sda:
ATA device, with non-removable media
Model Number: WDC WD5000AVDS-63U7B1
Serial Number: WD-WCAV9Y717334
Firmware Revision: 01.00A01
Transport: Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6
Standards:
Supported: 8 7 6 5
Likely used: 8
Alrighty, I think I've tested this board to death. Working on a script for a video now, and hopefully I'll have that up next week! Feel free to continue any discussions in this issue! I should have those new 10G transceivers in today :)
I set up a 4-drive, striped + mirrored zpool with recordsize=1M on my hacked-together build with an 8GB CM4, pictured below.
I'm seeing some wild numbers using your `iozone` tests: 900+ MB/s 1M reads/writes even with the file size set to 20 GB, and 4-5 MB/s 4K reads/writes, so I don't know what's going on there.
I also ran your `rsync` test using a 16GB file from my Mac mini (with 10GbE) to the 2.5 GbE port on the Taco, with more reasonable results:
I'd be happy to run some more tests if you'd like; I have another 8TB hard drive I can add to the array. However, it's looking like the PCIe 2.0 lane is going to be the real bottleneck, even for spinny drives.
About the build... I know the PSU is probably unnecessary, but I didn't want to buy $35 in cables only to find out the Taco couldn't drive 4-5 spinny hard drives.
@andyattebery So with stripe + mirror, I can definitely see the writes being a bit faster. You lose out on another drive's worth of storage and have to stick with an even number, but I didn't even think about trying RAID10 or striped mirror zpool in my tests.
For the iozone tests, I kept getting wild numbers with lower total sizes, so I kept increasing until the numbers stabilized (and seemed more rational). Before increasing to `5000M` for the 1M random test and `500M` for the 4K test, I was getting numbers anywhere from 100-1200 MB/sec, which seemed a bit wild, but must've been based on RAM caches.
I think once you get past a certain threshold the RAM cache mechanism breaks down and you see what ZFS is actually able to do on the disks.
I'm wondering if a striped mirror in ZFS really can put through more data than the bus would indicate? More testing required... As I'll mention in my video next week, I'm very new to ZFS and it seems a lot more complicated than the filesystems and RAID setups I'm used to!
If you want a quick way to pretty much disable all of the fancy ARC caching that ZFS does, you can change the `primarycache` setting.
Get the current values:
root@host:~# zfs get primarycache
NAME PROPERTY VALUE SOURCE
data primarycache all local
Change the value:
root@host:~# zfs set primarycache=metadata data
The other available settings are `all` (cache files and metadata) and `none`. I think allowing ZFS to keep the metadata cache is fair.
AFAICT, when using a ZFS filesystem, the ARC should be the only thing between you and the disks, so there should not be any additional buffering happening in the kernel.
Since I think it might be acceptable to add r8125 support to the Pi kernel to make these common 2.5G adapters work out of the box, I've opened an issue upstream: https://github.com/raspberrypi/linux/issues/4699
So this is cool... apparently support was added recently, so if you want an even quicker fix, you can run `sudo rpi-update` and it'll work out of the box. Hopefully that makes its way to the stable firmware soon so it'll be available everywhere!
I can't get my board to detect my microSD card or SATA SSD. The boot always fails with this output:
USB xHC init failed
SD: card not detected
Any ideas? Do you have to set a jumper or something to set the boot device?
@iandk - Do you mean you're trying to boot with a USB hard drive, and not microSD? Or you're trying to boot of a microSD card with Pi OS on it? It's not clear from your comment what you're trying to do—and are you doing this on a Taco board or some other CM4 board?
I've tried booting off a microSD card as well as a SATA SSD which I connected directly to the SATA slots. Both had the newest version of Pi OS installed. Yeah, I'm testing a Taco board.
Is the CM4 you're using a Lite version or one with eMMC installed? If using an eMMC compute module, it will never allow any kind of microSD card boot.
SATA boot support is not implemented on the Raspberry Pi currently; see https://github.com/raspberrypi/firmware/issues/1653
Oh, I thought I had the Lite version without eMMC, but I just checked and it indeed has eMMC.
That means I probably have to get another board which allows me to flash Pi OS to the eMMC?
@iandk - Yeah, it looks like (afaict so far) the Taco doesn't have the ability to boot the Pi into usbboot/rpiboot and allow flashing eMMC compute modules.
Yeah, this is the issue we did not consider for the first version. For the new hardware revision, we added a small button for usbboot.
Video is live! https://www.youtube.com/watch?v=G_px298IF2k
Video is live!
Thx. Where did you buy the Radxa Taco board and for how much?
See original issue: https://github.com/geerlingguy/raspberry-pi-pcie-devices/issues/202
I have a Taco (well, the Penta main board that goes inside) and would like to do some testing on it; run some benchmarks, test compiling ZFS, etc.
Things to test: