geerlingguy opened this issue 1 month ago:
Thanks to @Sknashville@theatl.social on Mastodon for making me aware of the Max's existence. It will be interesting to see whether this board lands closer to the goldilocks zone Radxa was aiming for with the X4...
@geerlingguy Very interesting - I've been looking for an OP5 Max benchmark like this. The main thing that stands out to me is the NVMe speed compared with the first OP5 (which isn't surprising, given the PCIe generation and lane count on the Max). This is what I got on my OP5:
| Benchmark | Result |
|---|---|
| iozone 4K random read | 50.70 MB/s |
| iozone 4K random write | 123.35 MB/s |
| iozone 1M random read | 363.86 MB/s |
| iozone 1M random write | 366.33 MB/s |
| iozone 1M sequential read | 368.94 MB/s |
| iozone 1M sequential write | 372.99 MB/s |
Your glmark2-es2-wayland result looks quite low ... but I imagine this is due to the OS and drivers installed. On my OP5 I've got a score of 4800 (Joshua Riek's Ubuntu 22.04, with the CPU and GPU governors set to performance and PAN_MESA_DEBUG=gofaster).
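For anyone wanting to reproduce that setup, here's a rough sketch; the sysfs paths are assumptions and vary by kernel and image, so check your own /sys tree first:

```
# Hedged sketch: set CPU and GPU governors to performance, then run glmark2
# with the Panfrost "gofaster" debug flag. Paths vary by kernel/image.
for g in /sys/devices/system/cpu/cpufreq/policy*/scaling_governor; do
  echo performance | sudo tee "$g"
done
# GPU devfreq node name is an assumption; check /sys/class/devfreq/ for yours
echo performance | sudo tee /sys/class/devfreq/*.gpu/governor
PAN_MESA_DEBUG=gofaster glmark2-es2-wayland
```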
Many thanks for the data on the NVMe speed!!
I also experienced the slow RX bandwidth on the RTL8125 by default.
I was able to get full speed (2.35 Gbps) by moving the IRQ handler to the big A76 cores:

```
ls /proc/irq/125/enP3p49s0
echo f0 > /proc/irq/125/smp_affinity
```
So it's not a limitation of the hardware.
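For anyone else hitting this, here's a rough sketch that looks up the IRQ number instead of hard-coding 125 (the IRQ and the interface name enP3p49s0 differ between images); run it as root. Note the setting does not survive a reboot, and irqbalance can overwrite it.

```
# Hedged sketch: find the RTL8125 IRQ for this interface and pin it to the
# Cortex-A76 cores (CPUs 4-7 on RK3588, i.e. affinity mask f0). Run as root.
IFACE=enP3p49s0
IRQ=$(awk -v dev="$IFACE" '$NF ~ dev { sub(":", "", $1); print $1; exit }' /proc/interrupts)
echo "Pinning IRQ ${IRQ} (${IFACE}) to CPUs 4-7"
echo f0 > "/proc/irq/${IRQ}/smp_affinity"
# If the driver registers several vectors (e.g. per-queue IRQs), repeat for each.
```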
## Basic information

## Linux/system information

## Benchmark results
### CPU

### Power

- Power draw under `stress-ng --matrix 0`: 10.9 W
- Power draw during the top500 HPL benchmark: 12.8 W

### Disk
SanDisk Ultra 32GB A1 microSD card
MakerDisk NVMe 2280 M.2 512GB SSD
Run the benchmark on any attached storage device (e.g. eMMC, microSD, NVMe, SATA) and add results under an additional heading.
Also consider running the PiBenchmarks.com script.
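For reference, iozone numbers like the table above come from runs along these lines; this is a hedged sketch, and the exact flags and test-file size used by the benchmark script may differ:

```
# Hedged sketch of typical iozone invocations (flags and 100M file size are assumptions):
#   -e  include fsync/flush in timing    -I  use O_DIRECT to bypass the page cache
#   -s  test file size                   -r  record (block) size
#   -i 0 / -i 1 / -i 2  sequential write / sequential read / random read+write
iozone -e -I -a -s 100M -r 4k -i 0 -i 1 -i 2      # 4K tests
iozone -e -I -a -s 100M -r 1024k -i 0 -i 1 -i 2   # 1M tests
```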
### Network

`iperf3` results:

Built-in 2.5 Gbps Ethernet (Realtek RTL8125 rev 05):

- `iperf3 -c $SERVER_IP`: 2.35 Gbps
- `iperf3 -c $SERVER_IP --reverse`: 773 Mbps
- `iperf3 -c $SERVER_IP --bidir`: 2.33 Gbps up, 362 Mbps down

Built-in WiFi (Synaptics AP6611S):

- `iperf3 -c $SERVER_IP`: 297 Mbps
- `iperf3 -c $SERVER_IP --reverse`: 176 Mbps
- `iperf3 -c $SERVER_IP --bidir`: 237 Mbps up, 38 Mbps down

### GPU
`glmark2-es2` / `glmark2-es2-wayland` results:

Note: This benchmark requires an active display on the device. Not all devices may be able to run `glmark2-es2`, so in that case, make a note and move on!

TODO: See this issue for discussion about a full suite of standardized GPU benchmarks.
### Memory

`tinymembench` results:

<details>
<summary>Click to expand memory benchmark result</summary>

```
tinymembench v0.4.10 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and writen ==
== bytes would have provided twice higher numbers) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
 C copy backwards : 11592.7 MB/s (5.9%)
 C copy backwards (32 byte blocks) : 11590.3 MB/s (0.2%)
 C copy backwards (64 byte blocks) : 11599.0 MB/s (0.5%)
 C copy : 11734.1 MB/s (0.2%)
 C copy prefetched (32 bytes step) : 12359.8 MB/s (0.2%)
 C copy prefetched (64 bytes step) : 12244.9 MB/s (0.3%)
 C 2-pass copy : 4635.4 MB/s
 C 2-pass copy prefetched (32 bytes step) : 7781.0 MB/s
 C 2-pass copy prefetched (64 bytes step) : 8196.3 MB/s
 C fill : 27981.7 MB/s (0.4%)
 C fill (shuffle within 16 byte blocks) : 28023.4 MB/s (0.3%)
 C fill (shuffle within 32 byte blocks) : 28033.7 MB/s (0.3%)
 C fill (shuffle within 64 byte blocks) : 28004.2 MB/s (0.3%)
 NEON 64x2 COPY : 12093.1 MB/s
 NEON 64x2x4 COPY : 12004.0 MB/s (0.1%)
 NEON 64x1x4_x2 COPY : 4554.4 MB/s (1.5%)
 NEON 64x2 COPY prefetch x2 : 10658.2 MB/s (0.2%)
 NEON 64x2x4 COPY prefetch x1 : 11332.3 MB/s (0.2%)
 NEON 64x2 COPY prefetch x1 : 10819.3 MB/s
 NEON 64x2x4 COPY prefetch x1 : 11340.1 MB/s (0.2%)
 ---
 standard memcpy : 12093.2 MB/s
 standard memset : 28018.2 MB/s (0.3%)
 ---
 NEON LDP/STP copy : 12069.4 MB/s
 NEON LDP/STP copy pldl2strm (32 bytes step) : 12648.6 MB/s
 NEON LDP/STP copy pldl2strm (64 bytes step) : 12627.3 MB/s
 NEON LDP/STP copy pldl1keep (32 bytes step) : 12264.5 MB/s
 NEON LDP/STP copy pldl1keep (64 bytes step) : 12252.8 MB/s
 NEON LD1/ST1 copy : 12005.7 MB/s (0.2%)
 NEON STP fill : 27941.7 MB/s
 NEON STNP fill : 27952.9 MB/s
 ARM LDP/STP copy : 12078.6 MB/s (0.1%)
 ARM STP fill : 28011.7 MB/s (0.3%)
 ARM STNP fill : 28026.2 MB/s (0.2%)

==========================================================================
== Framebuffer read tests. ==
== ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled. ==
== Writes to such framebuffers are quite fast, but reads are much ==
== slower and very sensitive to the alignment and the selection of ==
== CPU instructions which are used for accessing memory. ==
== ==
== Many x86 systems allocate the framebuffer in the GPU memory, ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover, ==
== PCI-E is asymmetric and handles reads a lot worse than writes. ==
== ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall ==
== performance improvement. For example, the xf86-video-fbturbo DDX ==
== uses this trick. ==
==========================================================================
 NEON LDP/STP copy (from framebuffer) : 1553.6 MB/s (18.1%)
 NEON LDP/STP 2-pass copy (from framebuffer) : 657.2 MB/s
 NEON LD1/ST1 copy (from framebuffer) : 785.7 MB/s
 NEON LD1/ST1 2-pass copy (from framebuffer) : 672.2 MB/s
 ARM LDP/STP copy (from framebuffer) : 771.4 MB/s (0.1%)
 ARM LDP/STP 2-pass copy (from framebuffer) : 669.8 MB/s (0.3%)

==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in the buffers ==
== of different sizes. The larger is the buffer, the more significant ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses. For extremely large buffer sizes we are expecting to see ==
== page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers are representing extra time, which needs to ==
== be added to L1 cache latency. The cycle timings for L1 cache ==
== latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. In the case if ==
== the memory subsystem can't handle multiple outstanding ==
== requests, dual random read has the same timings as two ==
== single reads performed one after another. ==
==========================================================================
block size : single random read / dual random read
      1024 : 0.0 ns / 0.0 ns
      2048 : 0.0 ns / 0.0 ns
      4096 : 0.0 ns / 0.0 ns
      8192 : 0.0 ns / 0.0 ns
     16384 : 0.0 ns / 0.0 ns
     32768 : 0.0 ns / 0.0 ns
     65536 : 0.0 ns / 0.0 ns
    131072 : 1.1 ns / 1.5 ns
    262144 : 2.3 ns / 2.9 ns
    524288 : 3.5 ns / 4.0 ns
   1048576 : 10.0 ns / 13.1 ns
   2097152 : 13.9 ns / 15.7 ns
   4194304 : 62.3 ns / 100.7 ns
   8388608 : 157.9 ns / 221.3 ns
  16777216 : 210.2 ns / 259.7 ns
  33554432 : 236.5 ns / 272.7 ns
  67108864 : 249.7 ns / 278.7 ns
```

</details>
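For reproducibility: tinymembench generally isn't packaged by distros, so it is built from source. A minimal sketch, assuming the usual ssvb/tinymembench upstream repository:

```
# Hedged sketch: build and run tinymembench from source (adjust the URL if you use a fork)
git clone https://github.com/ssvb/tinymembench.git
cd tinymembench
make
./tinymembench
```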
### sbc-bench results

Run sbc-bench and paste a link to the results here: https://0x0.st/XyIw.bin (https://github.com/ThomasKaiser/sbc-bench/issues/100)
### Phoronix Test Suite

Results from `pi-general-benchmark.sh`: